Multi vs Multi

Tarun Jain
6 min read · May 19, 2020


Parent/Index article

· Multi Core vs. Hyper-threading
· MultiTasking
· Multitasking, MultiProcessing, Multithreading on Single core
· Process vs Thread technical details
· Concurrency vs Parallelism
· Other articles

Multi Core vs. Hyper-threading

Multi Core

  • Provides multiple physical cores in a single CPU.
  • A dual-core CPU can run two different processes at a time (quad-core = 4 cores = 4 processes at a time).

Hyper-threading

  • Makes one physical processing unit appear as two logical processing units.
  • It is a bit of a cheat that makes a single core look like two different processing units.
  • The core keeps switching between two tasks, but actually executes only one at a time.
  • Even though the unit is only divided logically, a CPU with hyper-threading performs better than the same CPU without it.

Comparison

  • i5: 4 cores without hyper-threading => 4 processing units
  • i7: 4 cores with hyper-threading => 8 processing units (logically)
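You can see the logical processing unit count from code. A minimal Python sketch using only the standard library (`os.cpu_count()` reports *logical* CPUs, so a 4-core hyper-threaded chip typically reports 8):

```python
import os

# os.cpu_count() returns the number of *logical* CPUs the OS sees.
# On a 4-core CPU with hyper-threading this is typically 8; without
# hyper-threading it matches the physical core count.
logical_cpus = os.cpu_count()
print(f"Logical processing units: {logical_cpus}")
```

Note that the standard library only exposes the logical count; getting the physical core count requires a third-party package or an OS-specific query.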

Source : http://beyondthegeek.com/2017/09/29/multi-core-vs-hyper-threading/

MultiTasking

In a multitasking operating system, more than one task is executed at the same time. In this technique, multiple tasks, also known as processes, share common processing resources such as a CPU.

Types of Multitasking Operating System:

There are two types of multitasking operating systems:

  • Cooperative Multitasking:
    Early versions of Windows and classic Mac OS used cooperative multitasking. A Windows program would do some small unit of work in response to a message and then relinquish the CPU to the operating system until the program got another message. That worked well, as long as all programs were written with consideration for other programs and had no bugs.
  • Preemptive Multitasking:
    Modern desktop operating systems use preemptive multitasking. Unix used this form of multitasking from the beginning. Windows started using preemptive multitasking with Windows NT and Windows 95. Macintosh gained preemptive multitasking with OS X. In this type of operating system, the scheduler interrupts programs to tell them it’s time to give another program a turn with the CPU.
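Cooperative multitasking can be sketched in a few lines of Python using generators: each task does a small unit of work and then voluntarily yields, like a well-behaved program relinquishing the CPU. The task and scheduler here are illustrative, not any OS API:

```python
from collections import deque

def task(name, steps, log):
    # A cooperative task: it does one small unit of work, then
    # voluntarily yields control back to the scheduler.
    for i in range(steps):
        log.append(f"{name}{i}")
        yield

def run(tasks):
    # Round-robin scheduler: resume each task in turn until all finish.
    # If one task never yields, every other task starves - the core
    # weakness of cooperative multitasking.
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)
            queue.append(t)
        except StopIteration:
            pass

log = []
run([task("A", 3, log), task("B", 3, log)])
print(log)  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']
```

The interleaved output shows both tasks making progress even though only one runs at any instant, and it depends entirely on each task yielding regularly.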

Source : https://medium.com/@rmsrn.85/multitasking-operating-system-types-and-its-benefits-deb1211c1643

Multitasking, MultiProcessing, Multithreading on Single core

  • A single-core processor can run a single process (task) at a time.
  • Multithreading
    → Only one thread can execute at a time, but the operating system achieves multithreading using time slicing (thread context switches).
    → This thread switching happens frequently enough that the user perceives the threads as running at the same time (but they aren’t running in parallel!), and it occurs inside one process.
    → On a single-core system, a CPU-bound multithreaded program takes almost the same time as a single-threaded one; there is no speed or performance benefit.
  • MultiTasking
    → A single-core CPU gives the appearance that it is doing two things at once, but it is actually time-slicing between tasks so quickly that they appear to run simultaneously.
    → The CPU will time-slice between the two tasks.
    → It will take 2x to finish both tasks (x for the 1st task and x for the 2nd task).
  • MultiProcessing — means you have more than one CPU core (quad-core, dual-core, etc.), so multiprocessing is technically not possible on a single-core CPU; only multitasking is.
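The time slicing described above can be observed with Python’s `threading` module. A minimal sketch (the worker function and thread names are illustrative): two threads share one interpreter and both finish all their work, even though their small units of work are interleaved rather than run in parallel:

```python
import threading

results = []
lock = threading.Lock()

def worker(name, n):
    # Each thread does its work in small units; the OS time-slices
    # between the threads, so both appear to make progress "at once".
    for _ in range(n):
        with lock:
            results.append(name)

a = threading.Thread(target=worker, args=("A", 5))
b = threading.Thread(target=worker, args=("B", 5))
a.start(); b.start()
a.join(); b.join()

# Both threads completed every unit of work.
print(results.count("A"), results.count("B"))  # 5 5
```

Where in the list the "A"s and "B"s land depends on how the scheduler slices time between the threads, which is exactly the point.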

Process vs Thread technical details

  • All the threads of a process live in the same memory space, whereas processes have their separate memory space.
  • Threads are more lightweight and have lower overhead compared to processes; spawning a process is slower than spawning a thread.
  • Sharing objects between threads is easier, as they share the same memory space. To achieve the same between processes, we have to use some kind of IPC (inter-process communication) model, typically provided by the OS.
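The memory-space difference can be demonstrated directly. A minimal Python sketch (assuming a Unix system, since it uses the `fork` start method) in which a thread’s write is visible to the parent while a child process’s write is not:

```python
import multiprocessing
import threading

shared = {"count": 0}

def bump():
    shared["count"] += 1

# A thread lives in the same memory space, so its write is visible here.
t = threading.Thread(target=bump)
t.start(); t.join()
after_thread = shared["count"]      # 1

# A forked process gets its own copy of memory, so the parent's dict
# is unchanged; sharing would require IPC (pipes, queues, ...).
ctx = multiprocessing.get_context("fork")
p = ctx.Process(target=bump)
p.start(); p.join()
after_process = shared["count"]     # still 1

print(after_thread, after_process)  # 1 1
```

The child process did run `bump()`, but only against its own copy of `shared`; passing the result back would need an IPC primitive such as `multiprocessing.Queue`.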

Concurrency vs Parallelism

Concurrency

A concurrent program is one that can be decomposed into constituent parts, and each part can be executed out of order or in partial order without affecting the final outcome.

Concurrent system

→ A system capable of running several distinct programs or more than one independent unit of the same program in overlapping time intervals is called a concurrent system.
→ In concurrent systems, the goal is to maximize throughput and minimize latency.

The execution of two programs or units of the same program may not happen simultaneously.

  • A concurrent system can have two programs in progress at the same time where progress doesn’t imply execution.
  • One program can be suspended while the other executes. Both programs are able to make progress as their execution is interleaved.
  • For example, a browser running on a single core machine has to be responsive to user clicks but also be able to render HTML on screen as quickly as possible.
  • Concurrent systems achieve lower latency and higher throughput when programs running on the system require frequent network or disk I/O.
  • The classic example of a concurrent system is that of an operating system running on a single core machine. Such an operating system is concurrent but not parallel.
  • It can only process one task at any given point in time but all the tasks being managed by the operating system appear to make progress because the operating system is designed for concurrency. Each task gets a slice of the CPU time to execute and move forward.
  • Juggler analogy: a concurrent juggler is one who can juggle several balls at the same time. However, at any one point in time, he can only have a single ball in his hand while the rest are in flight. Each ball gets a time slice during which it lands in the juggler’s hand and is then thrown back up. A concurrent system is, in a similar sense, juggling several processes at the same time.
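This single-core interleaving can be sketched with Python’s `asyncio`: two coroutines run on one thread, and each `await` hands control back to the event loop so the other can make progress — concurrency without parallelism. The coroutine names are illustrative:

```python
import asyncio

log = []

async def juggle(name, steps):
    for i in range(steps):
        log.append(f"{name}{i}")
        # Yield control to the event loop: like a ball thrown back up
        # so another ball can land in the juggler's hand.
        await asyncio.sleep(0)

async def main():
    # Both coroutines are "in progress" at the same time, but only
    # one executes at any instant.
    await asyncio.gather(juggle("A", 3), juggle("B", 3))

asyncio.run(main())
print(log)  # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']
```

Both tasks make progress in overlapping time intervals on a single thread, which is precisely the concurrent-but-not-parallel case described above.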

Parallelism

A parallel system is one which necessarily has the ability to execute multiple programs at the same time.

  • Usually, this capability is aided by hardware in the form of multicore processors on individual machines or as computing clusters where several machines are hooked up to solve independent pieces of a problem simultaneously.
  • Remember that a problem has to be concurrent in nature (that is, portions of it can be worked on independently without affecting the final outcome) before it can be executed in parallel.
  • In parallel systems the emphasis is on increasing throughput and optimizing usage of hardware resources. The goal is to extract as much computational speedup as possible.
  • Example problems include matrix multiplication, 3D rendering, data analysis, and particle simulation.
  • Juggler analogy: a parallel system would map to at least two or more jugglers, each juggling one or more balls. In the case of an operating system, if it runs on a machine with, say, four CPUs, then it can execute four tasks at the same time, making execution parallel. Either a single (large) problem can be executed in parallel, or distinct programs can be executed in parallel on a system supporting parallel execution.
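A minimal Python sketch of the “independent pieces” idea, using a process pool so the pieces can run on separate cores (assuming a Unix system for the `fork` start method; the `square` function is a stand-in for a real unit of work such as a matrix block or a particle batch):

```python
from multiprocessing import get_context

def square(n):
    # Stand-in for an independent piece of work (a matrix block,
    # a particle batch, ...). No piece depends on another, so the
    # problem is concurrent and can be executed in parallel.
    return n * n

def parallel_squares(values, workers=4):
    # Each worker process computes its pieces independently;
    # pool.map reassembles the results in order.
    with get_context("fork").Pool(workers) as pool:
        return pool.map(square, values)

print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

On a multicore machine the pieces genuinely execute at the same time; on a single-core machine the same program still works, but the processes are merely multitasked.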

Concurrency vs Parallelism

  • From the above discussion it should be apparent that a concurrent system need not be parallel, whereas a parallel system is indeed concurrent.
  • Additionally, a system can be both concurrent and parallel e.g. a multitasking operating system running on a multicore machine.
  • Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once.
  • Last but not least, you’ll find literature describing concurrency as a property of a program or a system, whereas parallelism is a runtime behaviour of executing multiple tasks.
  • We end the lesson with an analogy, frequently quoted in online literature, of customers waiting in two queues to buy coffee. Single-processor concurrency is akin to alternatively serving customers from the two queues but with a single coffee machine, while parallelism is similar to serving each customer queue with a dedicated coffee machine.

Other articles

https://iorilan.medium.com/multi-processing-vs-multi-threading-vs-async-await-vs-goroutine-983716514e03
