What is Threading?
Threading in C# allows you to run multiple operations concurrently, utilizing CPU cores effectively. It harnesses the power of the hardware and the operating system to increase the performance of your software.
C# provides the `System.Threading` namespace for working with threads. The operating system allocates processor time to threads; a component in the operating system called the thread scheduler manages the threads independently.
Every process in Windows can have one or more threads. Now, you might ask: what is a process?
A process is an execution unit that runs when you use any application in Windows. For example, if you have Microsoft Edge or any other browser installed, open it and go to Task Manager’s Processes tab to see the processes for the currently running applications.
Tip: Press SHIFT + CTRL + ESC to open Task Manager.
Let’s look at an example now.
```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread thread = new Thread(PrintNumbers);
        thread.Start();   // Start the thread

        PrintNumbers();   // Run on the main thread
    }

    static void PrintNumbers()
    {
        for (int i = 1; i <= 5; i++)
        {
            Console.WriteLine($"Thread ID: {Thread.CurrentThread.ManagedThreadId}, Number: {i}");
            Thread.Sleep(500); // Simulate work
        }
    }
}
```
In the example above, you create a thread with the `Thread` class. Whatever work you want the thread to perform, you pass to its constructor. In this case, we pass the `PrintNumbers` method because we want our thread to execute it.
`thread.Start()` starts the thread, which then executes the `PrintNumbers` method in parallel with the main thread.
In the `Main` method body, you can see we call the `PrintNumbers` method directly. This means `PrintNumbers` runs on the Main thread as well. Therefore, the `PrintNumbers` method gets called on both threads (the main thread and the newly created thread).
`Thread.CurrentThread.ManagedThreadId` gives the unique ID of the thread running this method. This allows you to distinguish between the main thread and the new thread.
`Thread.Sleep(500)` pauses the thread for 500 milliseconds to simulate a time-consuming task.
Each loop iteration prints the thread ID and the current number, allowing you to see how both threads run concurrently.
The threads won’t run in any particular order. Every time you run this program, you’ll see different output. The operating system decides which thread executes at which time; it’s outside the developer’s control.
Whenever you run a C# program, it runs on the Main thread. Any new threads that you create are called worker threads. Threading in C# lets you create such worker threads, execute what you need, and eventually dispose of them. The main thread remains active as long as the application is running.
For better readability, you can give your threads any name you want while initializing them, as shown below.
```csharp
var newThread = new Thread(PrintNumbers)
{
    Name = "My Thread"
};
```
How to Manage Shared Resources in Multi-Threading?
As we know, the .NET CLR delegates multi-threading to the operating system (OS). The OS then uses the thread scheduler to perform multi-threading.
Each thread has its own local memory stack, and local memory doesn’t interfere between threads. For example, in the example above, no matter how many threads call the `PrintNumbers` method, each thread has its own value of the `i` variable to keep track of the loop.
Now the interesting question: when there are multiple threads, what happens to the shared data (data that multiple threads can access)? How does the OS decide how multiple threads access shared data?
Let’s answer this question. But first, we need to understand how multi-threading works on single-processor and multi-processor computers.
Single-processor computers use something called time slicing to allocate execution time to multiple threads: the thread scheduler context-switches between threads to let each one perform its work. As a result, a thread can be preempted, i.e. its execution interrupted for reasons outside its control. A thread can’t control when it’ll be preempted, because the thread scheduler decides how much execution time each thread gets. In simple terms, a single-processor computer has only one active thread at a time.
In multi-processor computers, on the other hand, multiple threads get execution time truly in parallel. For example, if a computer has 4 processors, then 4 threads can be active at any given time. In this situation, accessing shared resources (variables, properties, etc.) can be a bit tricky.
To coordinate access to common data among threads, we use locks. A lock restricts a section of your code to a single thread at a time, i.e. any block of code wrapped in a `lock` can be executed by only one thread at once.
Let’s see this in action.
```csharp
using System;
using System.Threading;

namespace Contoso.ConsoleApp;

public class LocksDemo
{
    private static bool _isCompleted;
    private static readonly object Lock = new object();

    public LocksDemo()
    {
        var thread = new Thread(PrintHelloWorld)
        {
            Name = "My worker thread"
        };
        thread.Start();

        PrintHelloWorld();
    }

    private static void PrintHelloWorld()
    {
        lock (Lock)
        {
            if (!_isCompleted)
            {
                Console.WriteLine(Thread.CurrentThread.Name);
                Console.WriteLine("Hello world!");
                _isCompleted = true;
            }
        }
    }
}
```
In the example code above, our aim is to print Hello world! just once. But in the constructor, we have two simultaneous calls to the `PrintHelloWorld` method: one via `thread.Start()` and the other by calling `PrintHelloWorld` directly.
Both the Main thread and the newly created worker thread will call this method, but we want the Hello world! text printed only once. To achieve this, we use two things: a boolean flag `_isCompleted` and a `Lock` object.
Because our `if` block is wrapped in the `lock` scope, no matter which thread calls the `PrintHelloWorld` method first, the lock ensures the other thread waits until the first thread finishes execution. During the first run, the value of `_isCompleted` becomes `true`, so when the next thread enters the `lock` scope, the `if` condition fails and we don’t see a second Hello world!
This proves that the `lock` statement ensured only one thread entered the scope at a time. In technical terms, the code inside the `lock` scope is called a critical section.
If you comment out the `lock` scope, you’ll see two Hello world! printouts on the console.
As you can see, we have named our worker thread My worker thread, which shows which thread printed the hello-world message on the console. You may have to run the program multiple times before the worker thread gets the opportunity to reach the critical section first; I had to run it 5–6 times to see it.
You can also do some heavy operations in the critical section; the other thread will still wait until the active one finishes. To simulate this, add a `Thread.Sleep(3000)` statement in the critical section to introduce a 3-second delay.
Creating a thread, with its own local memory stack, puts load on the CPU. Therefore, creating, destroying, and recreating threads is expensive.
Wouldn’t it be better if a thread could be created once and then reused again and again for various tasks? Yes, that’s possible using a thread pool, which you are going to learn about in the next section.
What is Thread Pool in C#?
A thread pool in C# is a collection of background worker threads that are managed by the .NET runtime. These threads are used to perform tasks in the background, such as executing asynchronous operations, without the overhead of creating and destroying threads manually.
A background thread is identical to a foreground thread, with one difference: if the Main thread dies, the background threads die as well. A worker thread becomes a background thread when its `IsBackground` property is set to `true`. Background threads do not keep the execution environment alive; the Main thread does.
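A minimal sketch of this behavior (the class and method names are my own): because the endless loop runs on a background thread, the process can still exit normally when `Main` returns.

```csharp
using System;
using System.Threading;

public static class BackgroundThreadDemo
{
    // Starts a never-ending loop on a background thread and returns immediately.
    public static bool StartBackgroundWorker()
    {
        var worker = new Thread(() =>
        {
            while (true) Thread.Sleep(100); // simulated endless background work
        });
        worker.IsBackground = true; // a foreground thread (the default) would keep the process alive
        worker.Start();
        return worker.IsBackground;
    }

    public static void Main()
    {
        Console.WriteLine($"Is background: {StartBackgroundWorker()}");
        // Main exits here, and the background worker is torn down with it.
    }
}
```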
The thread pool controls the number of threads it keeps. For example, if the limit is 10, then only 10 threads can remain active in the pool at any given time; any additional work items wait for the current threads to finish their tasks.
You can use the `Thread.CurrentThread.IsThreadPoolThread` property to check whether a thread belongs to the thread pool.
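As a quick illustration (the class and method names are assumptions of mine), you can queue a work item and inspect this property from inside it:

```csharp
using System;
using System.Threading;

public static class PoolCheckDemo
{
    // Queues a work item and reports whether it ran on a thread-pool thread.
    public static bool RanOnPoolThread()
    {
        bool onPool = false;
        using var done = new ManualResetEventSlim();

        ThreadPool.QueueUserWorkItem(_ =>
        {
            // Work queued to the pool runs on a pool-owned thread.
            onPool = Thread.CurrentThread.IsThreadPoolThread;
            done.Set();
        });

        done.Wait(); // block until the pool thread has run the item
        return onPool;
    }

    public static void Main()
    {
        Console.WriteLine($"Main thread pooled: {Thread.CurrentThread.IsThreadPoolThread}");
        Console.WriteLine($"Queued work pooled: {RanOnPoolThread()}");
    }
}
```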
Benefits of Thread Pool.
- Efficient Resource Management: Creating and destroying threads is resource-intensive. The thread pool reuses existing threads, reducing the overhead.
- Scalability: The thread pool can dynamically adjust the number of threads based on the workload, improving application performance.
- Simplified Programming Model: Developers can focus on the tasks rather than managing threads, making the code cleaner and easier to maintain.
Example: Using Thread Pool in C#.
Let’s create a simple example using the thread pool in C#.
- Create a Console Application: Open Visual Studio Code and create a new console application.
- Write the Code: Add the following code to your `Program.cs` file.
```csharp
using System;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Main thread starting.");

        // Queue a task to the thread pool
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "Task 1");
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), "Task 2");

        // Wait for user input to keep the console open
        Console.ReadLine();
    }

    // Method to be executed by the thread pool
    static void DoWork(object state)
    {
        string taskName = (string)state;
        Console.WriteLine($"{taskName} starting on thread {Thread.CurrentThread.ManagedThreadId}.");

        // Simulate work
        Thread.Sleep(2000);

        Console.WriteLine($"{taskName} completed on thread {Thread.CurrentThread.ManagedThreadId}.");
    }
}
```
Step-by-Step Explanation
- Main Method: The `Main` method is the entry point of the application. It starts by printing a message to the console.
- QueueUserWorkItem: This method queues a task to the thread pool. It takes a `WaitCallback` delegate that points to the method to be executed (`DoWork` in this case) and an optional state object (the task name).
- DoWork Method: This method is executed by the thread pool threads. It takes an `object` parameter, which is the state passed from `QueueUserWorkItem`. The method prints the task name and the thread ID, simulates work by sleeping for 2 seconds, and then prints a completion message.
- Thread.Sleep: This simulates some work being done by the thread.
- Console.ReadLine: This keeps the console window open until the user presses Enter, allowing you to see the output.
Output
When you run the application, you will see an output similar to this:
```
Main thread starting.
Task 1 starting on thread 3.
Task 2 starting on thread 4.
Task 1 completed on thread 3.
Task 2 completed on thread 4.
```
This demonstrates that the tasks are executed on different threads from the thread pool, and the threads are reused efficiently.
Using the thread pool, you can execute background tasks without the complexity of manually managing threads, leading to more efficient and scalable applications.
What is the Join method in threading?
The `Join` method in C# threading makes one thread wait until another thread has finished its execution. This is useful when you need to ensure that a particular thread completes its work before the main thread or another thread continues.
Here’s a simple example to illustrate how the `Join` method works:
- Create a new thread that performs some work.
- Use the `Join` method to make the Main thread wait until the new thread completes its execution.
```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Create a new thread that runs the DoWork method
        Thread workerThread = new Thread(DoWork);

        // Start the worker thread
        workerThread.Start();

        // Use the Join method to wait for the worker thread to finish
        workerThread.Join();

        // This line will only be executed after the worker thread has finished
        Console.WriteLine("Worker thread has finished. Main thread continues.");
    }

    static void DoWork()
    {
        // Simulate some work by sleeping for 5 seconds
        Console.WriteLine("Worker thread started.");
        Thread.Sleep(5000);
        Console.WriteLine("Worker thread completed.");
    }
}
```
- Creating a Thread: We create a new thread `workerThread` that will run the `DoWork` method.
- Starting the Thread: We start `workerThread` using the `Start` method.
- Using `Join`: The `Join` method is called on `workerThread`. This makes the main thread (the thread that runs the `Main` method) wait until `workerThread` has finished executing.
- Simulating Work: Inside the `DoWork` method, we simulate some work by making the thread sleep for 5 seconds using `Thread.Sleep(5000)`.
- Completion: After `workerThread` completes its work, the main thread resumes and prints a message indicating that the worker thread has finished.
In this example, you can see how the `Join` method synchronizes threads, ensuring that one thread completes before another continues.
Exception handling in C# multi-threading.
Exception handling in C# multi-threading involves managing errors that occur within individual threads. When a thread encounters an exception, it can disrupt the execution flow and potentially cause the entire application to crash if not properly handled.
Key points to understand:
- Isolated Handling: Each thread should handle its own exceptions using `try-catch` blocks to prevent unhandled exceptions from terminating the application.
- Communication: Threads can communicate exceptions back to the main thread or other threads, often using shared variables, events, or callback methods.
- Thread Safety: Ensure that exception handling does not introduce race conditions or other concurrency issues.
By effectively managing exceptions in multi-threaded applications, you can ensure that your application remains robust and can recover gracefully from errors.
Let’s use the `Thread` class to demonstrate exception handling in a multi-threaded environment. Here’s an example:
- Create a thread that performs some work and may throw an exception.
- Use a `try-catch` block within the thread’s method to catch and handle exceptions.
```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Create a new thread
        Thread thread = new Thread(DoWork);

        // Start the thread
        thread.Start();

        // Wait for the thread to complete
        thread.Join();
    }

    static void DoWork()
    {
        try
        {
            // Simulate some work that throws an exception
            Thread.Sleep(3000);
            throw new InvalidOperationException("Something went wrong in the thread!");
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception caught in thread: " + ex.Message);
        }
    }
}
```
In this example:

- The `DoWork` method simulates some work, waits for 3 seconds, and then throws an exception.
- The `Thread` class is used to create and start a new thread that runs the `DoWork` method.
- The `try-catch` block within the `DoWork` method catches and handles any exceptions that occur in the thread.
- The `thread.Join()` call ensures the main thread waits for the worker thread to complete.
This demonstrates how to handle exceptions within individual threads using the `Thread` class and `try-catch` blocks.
Challenges of Using Threads.
Although threads enable us to do parallel programming in C#, they bring their own challenges as well.
- Complexity: Managing threads manually can be complex. You need to handle thread creation, synchronization, and termination explicitly.
- Resource Management: Threads consume system resources. Creating too many threads can lead to resource exhaustion.
- Error Handling: Handling exceptions in threads can be tricky, especially when you need to communicate errors back to the main thread.
- Synchronization: Ensuring thread safety and avoiding race conditions requires careful use of synchronization primitives like locks, which can lead to deadlocks if not managed correctly.
- Scalability: Manually managing threads does not scale well with increasing workloads or more complex applications.
Tasks, on the other hand, can address most (if not all) of these challenges.
What are Tasks?
Tasks in C# are part of the Task Parallel Library (TPL) and provide a higher-level abstraction for managing asynchronous operations. Tasks simplify the process of running code asynchronously and handling the results or exceptions.
How Tasks Solve These Challenges
- Simplified Syntax: Tasks provide a simpler and more intuitive syntax for running asynchronous operations.
- Automatic Resource Management: The TPL manages the underlying threads, optimizing resource usage and improving performance.
- Built-in Exception Handling: Tasks have built-in mechanisms for handling exceptions, making it easier to manage errors.
- Synchronization: Tasks provide methods like `ContinueWith` and the `await` keyword (with async/await) to handle synchronization more easily.
- Scalability: The TPL efficiently manages the thread pool, allowing applications to scale better with increasing workloads.
Benefits of Tasks
- Ease of Use: Tasks are easier to use and understand compared to manually managing threads.
- Better Resource Management: The TPL optimizes resource usage, reducing the risk of resource exhaustion.
- Improved Error Handling: Tasks provide better mechanisms for handling exceptions and propagating errors.
- Enhanced Scalability: Tasks scale better with increasing workloads, making them suitable for complex applications.
- I/O Bound Operations: Tasks let you work with I/O-bound operations efficiently, such as interacting with databases, external web APIs, etc.
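The I/O-bound point deserves a quick sketch. In the snippet below, the `FetchAsync` helper is hypothetical — `Task.Delay` stands in for a real database or web API call — but it shows why tasks shine here: the two one-second waits overlap, so the total is roughly one second, not two, and no thread is blocked while waiting.

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public static class IoBoundDemo
{
    // Hypothetical stand-in for an I/O-bound call (e.g. a web API request).
    // Task.Delay waits without blocking any thread.
    private static async Task<string> FetchAsync(string name)
    {
        await Task.Delay(1000); // simulated network latency
        return $"{name}: done";
    }

    // Runs two simulated requests concurrently and returns the elapsed seconds.
    public static async Task<double> RunAsync()
    {
        var sw = Stopwatch.StartNew();

        // Start both "requests" first, then await them together.
        string[] results = await Task.WhenAll(FetchAsync("API 1"), FetchAsync("API 2"));

        foreach (var r in results) Console.WriteLine(r);
        return sw.Elapsed.TotalSeconds;
    }

    public static async Task Main()
    {
        // The two 1-second waits overlap, so this is ~1 second, not ~2.
        Console.WriteLine($"Elapsed: ~{await RunAsync():F0}s");
    }
}
```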
Let’s understand the usage of Tasks vs. Threads in the context of exception handling. You’ll see how Tasks can simplify your code to make your life as a developer easier.
First, let’s review how you would manage an exception using threads. We discussed this in the previous section; let’s just recap it quickly.
```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Create and start a thread
        Thread thread = new Thread(DoWork);
        thread.Start();

        // Wait for the thread to complete
        thread.Join();
    }

    static void DoWork()
    {
        try
        {
            // Simulate some work
            Console.WriteLine("Thread is working...");
            Thread.Sleep(1000);
            throw new InvalidOperationException("Something went wrong in the thread!");
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception caught in thread: " + ex.Message);
        }
    }
}
```
See below how using Tasks can simplify the code.
```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        try
        {
            // Create and start a task
            Task task = Task.Run(() => DoWork());

            // Await the task to complete
            await task;
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception caught in task: " + ex.Message);
        }
    }

    static void DoWork()
    {
        // Simulate some work
        Console.WriteLine("Task is working...");
        Task.Delay(1000).Wait();
        throw new InvalidOperationException("Something went wrong in the task!");
    }
}
```
- Using Threads: You must manually create, start, and join the thread. Exception handling is done within the thread method, and communicating errors back to the main thread can be complex.
- Using Tasks: Tasks provide a simpler syntax with `Task.Run` and `await`. Exception handling is more straightforward, and the TPL manages the underlying threads, optimizing resource usage and improving scalability.
Using Tasks, you can write cleaner, more maintainable, and scalable code, making managing asynchronous operations in your applications easier.
What is Continuation in TPL?
In the context of asynchronous programming, a continuation is a piece of code executed after a task is completed. Continuations allow you to specify what should happen next once an asynchronous operation finishes. This is particularly useful for managing complex workflows where multiple asynchronous operations must be coordinated.
Benefits of Continuations.
- Improved Readability: Continuations can make asynchronous code easier to read and maintain by avoiding deeply nested callbacks.
- Error Handling: Continuations can be used to handle errors more gracefully.
- Composability: Continuations allow you to chain multiple asynchronous operations together clearly and concisely.
Pitfalls to Avoid.
- Complexity: Overusing continuations can lead to complex and hard-to-follow code.
- Exception Handling: Ensure proper exception handling within continuations to avoid unhandled exceptions.
- Resource Management: Be cautious about resource management, especially when dealing with long-running tasks.
Here is a simple example to demonstrate the concept of continuations using the `Task` class in C#:
```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Task<int> task = Task.Run(() =>
        {
            // Simulate some work
            Task.Delay(1000).Wait();
            return 42;
        });

        task.ContinueWith(t =>
        {
            // This continuation runs after the task completes
            Console.WriteLine("Task completed with result: " + t.Result);
        });

        Console.WriteLine("Main method complete. Waiting for task...");
        Console.ReadLine(); // Keep the console open
    }
}
```
- Task Creation: A task is created using `Task.Run`, which simulates some work and returns a result.
- Continuation: The `ContinueWith` method is used to specify a continuation that runs after the task completes. This continuation simply prints the result of the task.
- Main Method: The main method completes, but the console remains open to allow the task and its continuation to finish.
What is Synchronization in Multithreading?
When working with multithreading in C#, synchronization is a crucial concept to understand. Synchronization ensures that multiple threads can safely access shared resources without causing data corruption or inconsistencies. Without proper synchronization, you might encounter issues like race conditions, deadlocks, and data corruption.
Why is Synchronization Important?
Imagine you have multiple threads trying to update a shared variable simultaneously. Without synchronization, these threads might interfere with each other, leading to unpredictable results. Synchronization mechanisms help coordinate the access to shared resources, ensuring that only one thread can modify the resource at a time.
Multiple Ways to Achieve Synchronization in C#.
C# provides several synchronization primitives to help manage access to shared resources. Let’s explore some of the most commonly used ones with beginner-friendly examples.
Lock Statement.
The `lock` statement is the simplest way to achieve synchronization. It ensures that only one thread can enter a critical section of code at a time.
Let’s look at the example below to see how the `lock` statement can help.
```csharp
using System;
using System.Threading;

class Program
{
    private static int counter = 0;
    private static readonly object lockObject = new object();

    static void Main()
    {
        Thread thread1 = new Thread(IncrementCounter);
        Thread thread2 = new Thread(IncrementCounter);

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine("Final counter value: " + counter);
    }

    static void IncrementCounter()
    {
        for (int i = 0; i < 1000; i++)
        {
            lock (lockObject)
            {
                counter++;
            }
        }
    }
}
```
In the code above:
- `private static readonly object lockObject = new object();` declares a static read-only object `lockObject` used for synchronization to prevent race conditions.
- `Thread thread1 = new Thread(IncrementCounter);` creates a new thread `thread1` that will execute the `IncrementCounter` method.
- `Thread thread2 = new Thread(IncrementCounter);` creates another thread `thread2` that will also execute the `IncrementCounter` method.
- `thread1.Start();` and `thread2.Start();` start the execution of both threads concurrently.
- `thread1.Join();` and `thread2.Join();` block the main thread until `thread1` and `thread2` complete their execution.
- `Console.WriteLine("Final counter value: " + counter);` prints the final value of `counter` after both threads have finished.
- `lock (lockObject)` in the `IncrementCounter` method ensures that only one thread can execute the code inside the `lock` block at a time, preventing race conditions by acquiring a mutual-exclusion lock on `lockObject`.
Monitor Class.
The `Monitor` class provides more control over synchronization than the `lock` statement. It allows you to enter and exit critical sections explicitly.
Here’s an example showing how you can use the `Monitor` class in your code.
```csharp
using System;
using System.Threading;

class Program
{
    private static int counter = 0;
    private static readonly object lockObject = new object();

    static void Main()
    {
        Thread thread1 = new Thread(IncrementCounter);
        Thread thread2 = new Thread(IncrementCounter);

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine("Final counter value: " + counter);
    }

    static void IncrementCounter()
    {
        for (int i = 0; i < 1000; i++)
        {
            Monitor.Enter(lockObject);
            try
            {
                counter++;
            }
            finally
            {
                Monitor.Exit(lockObject);
            }
        }
    }
}
```
In the code above:
- `private static readonly object lockObject = new object();` declares a static read-only object `lockObject` used for synchronization to prevent race conditions.
- `Thread thread1 = new Thread(IncrementCounter);` creates a new thread `thread1` that will execute the `IncrementCounter` method.
- `Thread thread2 = new Thread(IncrementCounter);` creates another thread `thread2` that will also execute the `IncrementCounter` method.
- `thread1.Start();` and `thread2.Start();` start the execution of both threads concurrently.
- `thread1.Join();` and `thread2.Join();` block the main thread until `thread1` and `thread2` complete their execution.
- `Console.WriteLine("Final counter value: " + counter);` prints the final value of `counter` after both threads have finished.
- `Monitor.Enter(lockObject);` in the `IncrementCounter` method acquires a lock on `lockObject` to ensure that only one thread can execute the code inside the `try` block at a time, preventing race conditions.
- `try { counter++; } finally { Monitor.Exit(lockObject); }` ensures that the lock is released even if an exception occurs, maintaining proper synchronization.
Mutex Class.
The `Mutex` class is used for synchronization across multiple processes. It can ensure that only one instance of a resource is accessed at a time, even if the resource is shared across different applications.
Example:
```csharp
using System;
using System.Threading;

class Program
{
    private static int counter = 0;
    private static Mutex mutex = new Mutex();

    static void Main()
    {
        Thread thread1 = new Thread(IncrementCounter);
        Thread thread2 = new Thread(IncrementCounter);

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine("Final counter value: " + counter);
    }

    static void IncrementCounter()
    {
        for (int i = 0; i < 1000; i++)
        {
            mutex.WaitOne();
            try
            {
                counter++;
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}
```
- The code above uses multithreading to increment a shared counter variable.
- A `Mutex` is used to ensure that only one thread can increment the counter at a time, preventing race conditions.
- Two threads (`thread1` and `thread2`) are created and started, both running the `IncrementCounter` method.
- Each thread increments the counter 1000 times.
- The `mutex.WaitOne()` method is called to acquire the mutex before incrementing the counter.
- The `counter++` operation is enclosed in a `try` block to ensure the mutex is released in the `finally` block, even if an exception occurs.
- The `thread1.Join()` and `thread2.Join()` methods are called to wait for both threads to finish before printing the final counter value.
Semaphore and SemaphoreSlim Classes.
The `Semaphore` and `SemaphoreSlim` classes limit the number of threads that can access a resource concurrently. `SemaphoreSlim` is a lightweight version of `Semaphore`.
Let’s understand this with an example.
```csharp
using System;
using System.Threading;

class Program
{
    private static SemaphoreSlim semaphore = new SemaphoreSlim(2); // Allow up to 2 threads

    static void Main()
    {
        for (int i = 0; i < 5; i++)
        {
            Thread thread = new Thread(AccessResource);
            thread.Start(i);
        }
    }

    static void AccessResource(object id)
    {
        Console.WriteLine($"Thread {id} waiting to enter...");
        semaphore.Wait();
        try
        {
            Console.WriteLine($"Thread {id} entered.");
            Thread.Sleep(1000); // Simulate work
        }
        finally
        {
            Console.WriteLine($"Thread {id} leaving.");
            semaphore.Release();
        }
    }
}
```
- The code above demonstrates the use of a `SemaphoreSlim` to control access to a shared resource by multiple threads.
- A `SemaphoreSlim` object is created with an initial count of 2, allowing up to 2 threads to enter the critical section simultaneously.
- In the `Main` method, 5 threads are created and started, each running the `AccessResource` method.
- Each thread attempts to enter the critical section by calling `semaphore.Wait()`.
- If the semaphore count is greater than 0, the thread enters the critical section and the count is decremented.
- Inside the critical section, the thread simulates work by sleeping for 1 second.
- After completing the work, the thread releases the semaphore by calling `semaphore.Release()`, which increments the semaphore count.
- The console output shows when each thread is waiting, entering, and leaving the critical section.
Try running the code above to see the output you get.
Synchronization is essential in multithreading to ensure safe access to shared resources. C# provides various synchronization primitives like `lock`, `Monitor`, `Mutex`, and `Semaphore` to help manage concurrent access. By understanding and using these synchronization mechanisms, you can write robust, thread-safe applications.