Multi-threading is the ability to execute multiple threads concurrently so that several tasks can run in parallel.
static void Main(string[] args)
{
    Thread t1 = new Thread(ThreadFun1);
    Thread t2 = new Thread(ThreadFun2);
    t1.Start();
    t2.Start();
    Console.ReadLine();
}
static void ThreadFun1()
{
    for (int i = 0; i < 100000; i++)
    {
        Thread.Sleep(4000);
        Console.WriteLine("This is function 1 & counter value is:" + i);
    }
}
static void ThreadFun2()
{
    for (int i = 0; i < 1000; i++)
    {
        Thread.Sleep(4000);
        Console.WriteLine("This is function 2 & counter value is:" + i);
    }
}
//Output:
This is function 2 & counter value is:0
This is function 1 & counter value is:0
This is function 1 & counter value is:1
This is function 2 & counter value is:1
In the above code, both functions run concurrently. To create a multi-threaded application like this in C#, import the System.Threading namespace.
Foreground Thread & Background Thread
There are two types of threads in C#: foreground threads and background threads. The above example uses foreground threads.
A foreground thread keeps running even after the main application (main thread) quits or stops, whereas a background thread quits when the main application quits.
Foreground Thread Code:
static void Main(string[] args)
{
    Thread Th1 = new Thread(Function1);
    Th1.Start();
    Console.WriteLine("The main application Quits");
}
static void Function1()
{
    Console.WriteLine("function1 is Started");
    Console.ReadLine();
    Console.WriteLine("function1 is Ended");
}
// output
The main application Quits
function1 is Started
function1 is Ended
In this foreground-thread example, the main method finishes, but Function1 has already started and keeps the process alive while it waits for user input to complete.
Background Thread Code:
static void Main(string[] args)
{
    Thread Th1 = new Thread(Function1);
    Th1.IsBackground = true; // making the thread a background thread
    Th1.Start();
    Console.WriteLine("The main application Quits");
}
static void Function1()
{
    Console.WriteLine("function1 is Started");
    Console.ReadLine();
    Console.WriteLine("function1 is Ended");
}
//output
function1 is Started
The main application Quits
In background-thread execution, the application doesn’t wait for the thread to complete: as soon as the main application quits, the background thread quits as well.
How to debug your threading application in C#?
In a normal application it is easy to set a breakpoint and step through the code, but in a multi-threaded application it is difficult to identify which thread is currently running.
To debug a thread, put a breakpoint and go to Debug –> Windows –> Threads.
In the thread debug window there is a main thread (static void Main) and two other threads pointing to the same method, ThreadFun1. The yellow arrow shows that the breakpoint is currently hit on thread no. 6.
To make debugging easier, it is best practice to name each thread. To assign a name, use the Thread.Name property.
Thread t1 = new Thread(ThreadFun1);
t1.Name = "First Thread"; // Assigning the name of the thread
Thread t2 = new Thread(ThreadFun1);
t2.Name = "Second Thread"; // Assigning the name of the thread
t1.Start();
t2.Start();
Console.ReadLine();
Further, if you want to freeze one thread and debug only the other, the Freeze and Thaw commands can be used. In the thread debugging window, right-click the thread you want to freeze and click Freeze; to release it, click Thaw.
How to make objects thread-safe in a multi-threaded C# application?
While working with multiple threads, if one thread is closing an object while another is trying to use it at the same time, there will be a conflict. Thread safety is a property that avoids data races: situations in which data ends up with correct or incorrect values depending on the order in which multiple threads access and modify it. If data sharing is not required, it is better to give each thread a private copy of the data. When data must be shared, provide explicit synchronization to ensure that the program behaves deterministically.
To make objects thread-safe in a multi-threading environment, we can use synchronization techniques. The synchronization techniques currently available in C# include:
Lock:
In C#, the lock keyword ensures that only one thread executes a piece of code at a time: a thread cannot enter a critical section while another thread is inside it.
class Maths
{
    public int num1;
    public int num2;
    private readonly object lockObj = new object(); // lock on a private object rather than on "this"
    Random o = new Random();
    public void Divide()
    {
        for (long i = 0; i < 1000; i++)
        {
            lock (lockObj) // only one thread can operate within this lock scope
            {
                num1 = o.Next(1, 2);
                num2 = o.Next(1, 2);
                int result = num1 / num2;
                num1 = 0;
                num2 = 0;
                Console.WriteLine(result.ToString());
            }
        }
    }
}
We can also use Monitor.Enter and Monitor.Exit; the lock statement is in fact shorthand for Monitor.Enter/Monitor.Exit wrapped in a try/finally block.
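To show the equivalence, here is a minimal sketch of that try/finally pattern written out with Monitor; the Counter class and its member names are illustrative, not taken from the example above:

```csharp
using System;
using System.Threading;

class Counter
{
    private readonly object _sync = new object(); // dedicated lock object
    private int _count;

    public void Increment()
    {
        Monitor.Enter(_sync);   // equivalent to entering a lock(_sync) block
        try
        {
            _count++;           // critical section
        }
        finally
        {
            Monitor.Exit(_sync); // always released, even if an exception is thrown
        }
    }

    public int Count { get { lock (_sync) { return _count; } } }
}

class Program
{
    static void Main()
    {
        var c = new Counter();
        var threads = new Thread[4];
        for (int i = 0; i < 4; i++)
        {
            threads[i] = new Thread(() => { for (int j = 0; j < 1000; j++) c.Increment(); });
            threads[i].Start();
        }
        foreach (var t in threads) t.Join();
        Console.WriteLine(c.Count); // 4000: no increments are lost under the lock
    }
}
```

The advantage of the explicit Monitor form is that the finally block guarantees the lock is released even when the critical section throws.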
Mutex:
The lock/monitor synchronization methods have a limitation: they ensure thread safety only for threads within the same process, i.e. threads created by the application itself. They have no control over threads that come from other processes or applications.
To overcome this limitation we can use a Mutex. Locks/monitors ensure in-process thread safety, whereas a Mutex can ensure thread safety across processes. Normally an application only deals with its own in-process threads, so in what scenario would we need a Mutex?
For example, say there is an application (App.exe). When you launch it, the 1st instance runs; launch it again and a 2nd instance opens, and so on. These instances run as separate processes. To prevent multiple instances, we can use a Mutex.
The following code shows how a mutex lets us check whether an instance is already running and refuse to start a new one. A mutex can be named or unnamed depending on the requirement. If you wish the mutex to be accessible system-wide, give it a proper, unique name; if you want to limit access to the process that created it, there is no need to provide a name.
static void Main(string[] args)
{
    using (Mutex m1 = new Mutex(false, "MultiThreadingMutex")) // not initially owned; the name makes it visible system-wide
    {
        if (!m1.WaitOne(5000, false)) // wait up to 5 secs to acquire the mutex
        {
            Console.WriteLine("Another instance is running");
            Console.ReadLine();
            return;
        }
        Console.WriteLine("Application does something");
        Console.WriteLine("Application quits");
        Console.ReadLine();
        m1.ReleaseMutex(); // release ownership before the mutex is disposed
    }
}
When executing the exe the first time, the application runs fine; on the second attempt the application prints “Another instance is running”.
Semaphore & SemaphoreSlim:
Semaphore is a thread-synchronization method modeled on railway signaling. On a set of railway tracks only one train is allowed on a track at a time, and guarding the track is a semaphore. A train must wait before entering the single track until the semaphore permits it, and when the train enters, the semaphore changes its state to keep other trains out. When the train leaves the track, the semaphore changes state again to allow another train to enter.
In C#, Semaphore is used to limit the number of threads that can have access to a shared resource concurrently by allowing one or more threads to enter into the critical section and execute the task concurrently with thread safety.
Semaphore is a more flexible synchronization method than lock/monitor and mutex: a lock or mutex admits only one thread at a time, but with Semaphore and SemaphoreSlim we can define how many threads may pass at any given time.
static Semaphore sem = null;
static void Main(string[] args)
{
    try
    {
        sem = Semaphore.OpenExisting("SemaphoreName"); // attach to the semaphore if another instance already created it
    }
    catch (WaitHandleCannotBeOpenedException)
    {
        sem = new Semaphore(3, 3, "SemaphoreName"); // otherwise create it: three threads can pass at one time
    }
    Console.WriteLine("Waiting at the gate - Ready to Own");
    sem.WaitOne(); // from here until Release, only 3 threads can be inside this area
    Console.WriteLine("Thread Owned -- I am inside");
    Console.ReadLine();
    sem.Release();
    Console.ReadLine();
}
In the above code the semaphore was initialized with a count of 3, which means only 3 threads can be inside the region between sem.WaitOne() and sem.Release() at once; a 4th thread has to wait at the gate. Likewise, three application instances are allowed into the critical region, but the 4th one waits at the gate.
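SemaphoreSlim is the lightweight, in-process counterpart: it cannot be named or shared across processes, but it is faster for threads within one application. A minimal sketch with illustrative names (Worker, slim), not taken from the example above:

```csharp
using System;
using System.Threading;

class Program
{
    // At most 2 threads may be inside the critical region at once.
    static readonly SemaphoreSlim slim = new SemaphoreSlim(2, 2);

    static void Worker(object id)
    {
        slim.Wait();            // blocks while 2 threads are already inside
        try
        {
            Console.WriteLine("Thread " + id + " entered");
            Thread.Sleep(200);  // simulate work
        }
        finally
        {
            slim.Release();     // let a waiting thread in
        }
    }

    static void Main()
    {
        var threads = new Thread[4];
        for (int i = 0; i < 4; i++)
        {
            threads[i] = new Thread(Worker);
            threads[i].Start(i + 1);
        }
        foreach (var t in threads) t.Join();
    }
}
```

With four threads and a capacity of 2, the third and fourth threads block at Wait() until one of the first two calls Release().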
AutoResetEvent and ManualResetEvent
AutoResetEvent
This is a way of thread-safe synchronization that uses signaling: one thread waits until another thread finishes a certain task and signals it, and it then resumes from the point where it paused.
static AutoResetEvent oAuto = new AutoResetEvent(false);
static void Main(string[] args) // thread 1 - the application thread
{
    new Thread(TaskToDo).Start(); // thread 2 -- executes TaskToDo
    Console.ReadLine();
    oAuto.Set(); // release WaitOne at 1
    oAuto.Set(); // release WaitOne at 2
}
static void TaskToDo()
{
    Console.WriteLine("Starting 1 ...");
    oAuto.WaitOne();
    Console.WriteLine("Finishing 1...");
    Console.WriteLine("Starting 2 ...");
    oAuto.WaitOne();
    Console.WriteLine("Finishing 2...");
}
In the code above, Main runs on thread 1 and a second thread is assigned to TaskToDo. When thread 2 reaches WaitOne() it blocks, and only when the main thread sends the Set() signal does it continue and finish its job.
ManualResetEvent
ManualResetEvent also provides thread-safe synchronization through signaling, so what is the difference between AutoResetEvent and ManualResetEvent?
With AutoResetEvent, each WaitOne() pauses the thread and each Set() resumes it. If multiple WaitOne() calls are issued in the process, each one requires its own Set() to release the wait; otherwise the thread will not resume. With ManualResetEvent, a single Set() call releases all the WaitOne() waits.
static ManualResetEvent oAuto = new ManualResetEvent(false);
static void Main(string[] args) // thread 1 - the application thread
{
    new Thread(TaskToDo).Start(); // thread 2 -- executes TaskToDo
    Console.ReadLine();
    oAuto.Set(); // a single Set releases both WaitOne calls
}
static void TaskToDo()
{
    Console.WriteLine("Starting 1 ...");
    oAuto.WaitOne();
    Console.WriteLine("Finishing 1...");
    Console.WriteLine("Starting 2 ...");
    oAuto.WaitOne();
    Console.WriteLine("Finishing 2...");
}
Thread Pooling
Normally, when threading is used, the process is: request thread –> create thread object –> allocate resources –> execute task –> thread is sent to the garbage collector. With thread pooling, instead of discarding the thread, it is returned to the pool so it can be reused by another task. The main benefits of thread pooling are:
1. No need to recreate a thread and allocate resources every time a thread is required, which improves performance.
2. A limit on the pool size: spawning too many threads puts a significant load on the operating system, so capping thread creation yields a significant performance advantage.
class Program
{
    static void Main(string[] args) // thread 1 - the application thread
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(Function1)); // queue work to the thread pool; the callback accepts an object
        Console.ReadLine();
    }
    static void Function1(object o)
    {
        Console.WriteLine("Function1 called");
    }
}
The output of using ThreadPool looks the same as using normal threads, but they differ in performance. Let’s measure the performance of the thread pool.
static void Main(string[] args) // thread 1 - the application thread
{
    Console.WriteLine("starting warmup to JIT-compile the code");
    for (int i = 0; i < 10; i++)
    {
        WithThreadPool();
        WithoutThreadPool();
    }
    Console.WriteLine("end warmup");
    // using a stopwatch to record the execution time
    Stopwatch oWatch = new Stopwatch();
    Console.WriteLine("Recording With thread Pool");
    oWatch.Start();
    WithThreadPool();
    oWatch.Stop();
    Console.WriteLine(oWatch.ElapsedTicks);
    oWatch.Reset();
    Console.WriteLine("Recording With thread only");
    oWatch.Start();
    WithoutThreadPool();
    oWatch.Stop();
    Console.WriteLine(oWatch.ElapsedTicks);
    oWatch.Reset();
    Console.ReadLine();
}
static void WithoutThreadPool()
{
    for (int i = 0; i < 10; i++)
    {
        // creates 10 threads
        Thread thread = new Thread(Function1);
        thread.Start();
    }
}
static void WithThreadPool()
{
    for (int i = 0; i < 10; i++)
    {
        // using the thread pool
        ThreadPool.QueueUserWorkItem(new WaitCallback(Function1));
    }
}
static void Function1(object o)
{
}
//Output
starting warmup to JIT-compile the code
end warmup
Recording With thread Pool
457
Recording With thread only
2350885
Even calling an empty function, with or without the thread pool, makes a huge difference in performance.
Task Parallel Library (TPL)
Threads run concurrently, but they don’t necessarily utilize the hardware in parallel. To actually run tasks on separate processor cores, we can use the Task Parallel Library (TPL). TPL is a higher-level wrapper over System.Threading that does more, and more powerful, work than raw threading. According to Microsoft:
The TPL scales the degree of concurrency dynamically to most efficiently use all the processors that are available. In addition, the TPL handles the partitioning of the work, the scheduling of threads on the ThreadPool, cancellation support, state management, and other low-level details. By using TPL, you can maximize the performance of your code while focusing on the work that your program is designed to accomplish.
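Of the TPL features listed above, cancellation support deserves a quick sketch. A task cannot be aborted from outside; instead it observes a CancellationToken and stops cooperatively. The loop body below is illustrative, not from the original examples:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var cts = new CancellationTokenSource();

        Task t = Task.Run(() =>
        {
            for (int i = 0; i < 1000; i++)
            {
                // Stop cooperatively as soon as cancellation is requested.
                cts.Token.ThrowIfCancellationRequested();
                Thread.Sleep(100); // simulate a unit of work
            }
        }, cts.Token);

        Thread.Sleep(300);
        cts.Cancel(); // request cancellation from the main thread

        try { t.Wait(); }
        catch (AggregateException)
        {
            Console.WriteLine("Task was cancelled, status: " + t.Status);
        }
    }
}
```

Because the same token is passed to Task.Run and thrown from inside the loop, the task ends in the Canceled state rather than the Faulted state.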
To check the hardware resources that Thread or TPL uses, you can monitor with “perfmon” (execute “perfmon” from Run). In Performance Monitor, delete the existing counters and add the counters required to monitor the processors: select Processor Time and select all the instances (each instance represents one of the processors).
If you run the thread-based application and check the monitor, you can notice that not all the processors are utilized optimally: some are over-utilized while others are under-utilized. This means actual parallelism is not achieved; only time slicing is happening on the hardware processors. To utilize the hardware to the maximum and execute parallel threads in the most optimized way, we can use TPL.
static void Main(string[] args) // thread 1 - the application thread
{
    //Thread o1 = new Thread(RunMillionRecords);
    //o1.Start();
    // Parallel.For requires a start index, an end index, and the task to call
    Parallel.For(0, 1000000, x => RunMillionRecords());
    Console.ReadLine();
}
static void RunMillionRecords()
{
    string a = "";
    for (int i = 0; i < 1000000; i++)
    {
        a = a + "a";
    }
    Console.WriteLine(a);
}
The performance monitor now shows a vast difference in hardware utilization, which significantly improves the performance of the application.
Task automatically pools threads, so its performance is far better than using raw threads.
static void Main(string[] args) // thread 1 - the application thread
{
    for (int i = 0; i < 1000; i++)
    {
        // declaring a Task, which automatically draws on the thread pool
        Task t = new Task(Function1);
        //Thread t = new Thread(Function1);
        t.Start();
    }
    Console.ReadLine();
}
private static void Function1()
{
    string a = "";
    a = "asdf";
}
The differences between Task and Thread are:
1. TPL scales work across all available cores through its scheduler, whereas with raw threads the distribution of work across processors is left entirely to you and the OS scheduler.
2. TPL does thread pooling automatically, whereas with Thread we have to manage pooling ourselves.
3. Thread is about how: you create and manage the thread yourself. Task is about what: you describe the work you want to run, and the library manages the underlying threads for you. In other words, Task is an abstraction that encapsulates the Thread logic behind it.
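To illustrate point 3, a Task can directly represent the "what" and even hand a result back via Task&lt;TResult&gt;, which a raw Thread cannot do. A small sketch (the computation is illustrative):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Describe *what* to compute; the library schedules it on pooled threads.
        Task<int> sumTask = Task.Run(() =>
        {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            return sum;
        });

        // Result blocks until the task finishes, then hands back the value.
        Console.WriteLine("Sum = " + sumTask.Result); // prints "Sum = 5050"
    }
}
```

With a Thread you would have to pass the result back yourself through a shared field plus a Join; the Task carries it for you.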