Analysis of Algorithms

  • We cannot compare two algorithms directly, because factors such as the hardware and the tools we use also have a significant impact on performance.
    → For example, an algorithm’s performance varies with the operating system, the processor, etc. So even if we have measured the performance of two algorithms on a particular system, they may well perform differently on a different system.
  • To tackle this issue, we use asymptotic analysis to compare the space and time complexity of algorithms. Asymptotic analysis compares two algorithms based on how their performance changes as the input size is varied (increased or decreased).
  • Even when the input size is the same, the running time of an algorithm may still vary depending on the particular input. Hence, we perform best-, average-, and worst-case analyses, covering the inputs on which the algorithm performs unusually well or unusually poorly.

Big Oh Notation | O-notation

Big O is the upper bound as n tends to infinity.

O(N) takes at most a constant multiple of N steps.

  • O(f(n)) is the set of all functions with a smaller or the same order of growth as f(n).
  • O(n²) = {n², 100n + 5, log n, …}
  • → n³ ∉ O(n²)
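
As a quick sanity check of the definition, here is a minimal Python sketch (the helper names and the witness constants c = 105, n₀ = 1 are just one illustrative choice) that spot-checks f(n) ≤ c·g(n) for f(n) = 100n + 5 and g(n) = n²:

```python
# Minimal sketch: spot-checking the Big-O definition f(n) <= c * g(n) for n >= n0
# on sampled values. For f(n) = 100n + 5 and g(n) = n^2, the witnesses c = 105 and
# n0 = 1 work (any valid pair of constants would do); a real proof covers every
# n >= n0, not just a sample.

def f(n):
    return 100 * n + 5

def g(n):
    return n * n

c, n0 = 105, 1

assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
print("100n + 5 <= 105 * n^2 for all sampled n >= 1, consistent with 100n + 5 ∈ O(n^2)")
```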

Omega Notation | Ω notation

Ω(N) takes at least a constant multiple of N steps.

  • Ω(f(n)) is the set of all functions with a larger or the same order of growth as f(n).
  • Ω(n²) = {n², n³, n⁴ + 10, …}

Theta Notation | Θ notation

Θ(N) takes at least c₁·N and at most c₂·N steps for some positive constants c₁ and c₂, i.e. on the order of N steps.

  • Θ(f(n)) is the set of all functions with the same order of growth as f(n).
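
Similarly, a minimal sketch (again with purely illustrative witness constants) spot-checking both sides of the Θ definition for f(n) = 2n² + 10n against g(n) = n²:

```python
# Minimal sketch: spot-checking the Theta definition
#   c1 * g(n) <= f(n) <= c2 * g(n)   for all n >= n0
# on sampled values. For f(n) = 2n^2 + 10n and g(n) = n^2, the witnesses
# c1 = 2, c2 = 3, n0 = 10 work (for n >= 10 we have 10n <= n^2).

def f(n):
    return 2 * n * n + 10 * n

def g(n):
    return n * n

c1, c2, n0 = 2, 3, 10

assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("2n^2 <= 2n^2 + 10n <= 3n^2 for all sampled n >= 10, consistent with Θ(n^2)")
```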

Little Oh Notation | o-notation

  • o(f(n)) is the set of all functions with a smaller rate of growth than f(n).
  • o(n²) = {100n + 5, log n, …}
  • 2n = o(n²)
  • 2n² ≠ o(n²)
  • If T(n) = o(f(n)), then T(n) is also O(f(n)), but the converse does not hold.

Little-Omega Notation | ω-notation

  • ω(f(n)) is the set of all functions with a larger rate of growth than f(n).
  • ω(n²) = {n³, n⁴ +10, …}
  • Example —
    → 2n² = ω(n)
    → 2n ≠ ω(n)
  • If T(n) is ω(f(n)), then T(n) is also Ω(f(n)), but the converse does not hold.
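
For reference, the strict bounds can also be written with explicit quantifiers (the standard textbook definitions). Unlike O and Ω, the inequality has to hold for every positive constant c, not just for some c:

```latex
% Little-o: f grows strictly slower than g.
f(n) = o(g(n)) \iff \forall c > 0 \;\; \exists n_0 \;\; \forall n \ge n_0 : \; f(n) < c \cdot g(n)

% Little-omega: f grows strictly faster than g.
f(n) = \omega(g(n)) \iff \forall c > 0 \;\; \exists n_0 \;\; \forall n \ge n_0 : \; f(n) > c \cdot g(n)
```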

Big O and Big Omega of worst-case time complexity

Source — https://stackoverflow.com/a/72997863

These concepts can be quite confusing.

O, Ω and Θ aren’t actually tied to worst, best and average time complexities. They just describe relations between functions, or complexity classes.

It is not quite correct to say that O describes the worst case, Ω the best case, and Θ the average.
Rather, O describes an upper bound, Ω a lower bound, and Θ both at once.

For instance, it is perfectly correct to say that Quicksort has an average time complexity of O(n log n) and a worst-case complexity of O(n²). What is meant is that they are no higher than these complexities.

In short:

f(n) = O(g(n)) means f(n) is bounded above by g(n). Analogous to ≤.

f(n) = Ω(g(n)) means f(n) is bounded below by g(n). Analogous to ≥.

f(n) = Θ(g(n)) means f(n) is bounded both above and below by g(n). Analogous to =.

In practice you often see big-O used when big-Θ would have been more informative. In general, when you publish a new algorithm and want to claim that it is asymptotically faster than others, you might just say that it has a worst-case time complexity of O(n²) when the previously known fastest algorithm was, e.g., O(n³). Everyone then understands that you have found an asymptotically faster algorithm. Maybe it turns out that your algorithm is actually O(n^1.99), but it was easier to prove that it is O(n²). That is a correct statement, because n^1.99 = O(n²), but it would not have held for Θ.

And finally, since you wanted an example of where O and Ω might differ: Quicksort has average time complexity O(n log n). But it is also correct to say it has average time complexity O(n^100), because

n log n = O(n^100).

Similarly, we can say it is Ω(1), because it certainly takes at least constant time.
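
To see the worst-case vs. average-case distinction concretely, here is a minimal sketch (a naive first-element-pivot quicksort written only for illustration, with a hypothetical helper quicksort_comparisons) that counts pivot comparisons on a random input versus an already-sorted one:

```python
# Minimal sketch: counting pivot comparisons of a naive quicksort to contrast the
# average case (~n log n comparisons on random input) with the worst case
# (~n^2/2 comparisons on already-sorted input, where the first-element pivot
# always produces a maximally unbalanced split).
import random
import sys

sys.setrecursionlimit(5_000)  # the sorted-input worst case recurses about n levels deep

def quicksort_comparisons(a):
    """Sort a copy of `a` with a naive quicksort and return the number of pivot comparisons."""
    count = 0

    def sort(lst):
        nonlocal count
        if len(lst) <= 1:
            return lst
        pivot, rest = lst[0], lst[1:]
        count += len(rest)  # each remaining element is partitioned against the pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    sort(list(a))
    return count

n = 1_000
random_input = random.sample(range(n), n)
sorted_input = list(range(n))

print("random input comparisons:", quicksort_comparisons(random_input))  # on the order of n log n
print("sorted input comparisons:", quicksort_comparisons(sorted_input))  # n(n-1)/2 = 499,500 here
```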

What exactly does big Θ notation represent?

Source — https://stackoverflow.com/a/12338937

First let’s understand what big O, big Theta and big Omega are. They are all sets of functions.

  • Big O is giving upper asymptotic bound, while
  • Big Omega is giving a lower bound.
  • Big Theta gives both.

Omega is the lower bound as n tends to infinity.

Theta is both the upper and lower bound as n tends to infinity.

Note that all bounds are only valid “as n tends to infinity”, because the bounds do not hold for low values of n (less than n₀). The bounds hold for all n ≥ n₀, but not below n₀, where lower-order terms become dominant.

Θ(f(n)) is the set of all functions with the same order of growth as f(n).

Ω(f(n)) is the set of all functions with a larger or the same order of growth as f(n). Ω(n²) = {n², n³, n⁴ + 10, …}
→ 6n + 100 ∉ Ω(n²) [no matter what value of c₁ we choose, 6n + 100 can never be bounded below by c₁·n², i.e. it is not possible to satisfy c₁·n² ≤ 6n + 100 for all large n]
→ n³ ∈ Ω(n²) — n³ ≥ c₁·n² holds for all n ≥ c₁, i.e. n³ is bounded below by a constant multiple of n²

  • Everything that is Θ(f(n)) is also O(f(n)), but not the other way around. T(n) is said to be in Θ(f(n)) if it is both in O(f(n)) and in Ω(f(n)).
  • In set terminology, Θ(f(n)) is the intersection of O(f(n)) and Ω(f(n)).
  • For example, merge sort’s worst case is both O(n·log(n)) and Ω(n·log(n)), and thus is also Θ(n·log(n)); but it is also O(n²), since n² is asymptotically "bigger" than it. However, it is not Θ(n²), since the algorithm is not Ω(n²).

A slightly deeper mathematical explanation

Big-O gives an asymptotic upper bound. If T(n) is O(f(n)), it means that from a certain n₀ onward, there is a constant C such that T(n) ≤ C · f(n). Big-Omega, on the other hand, says there is a constant C₂ such that T(n) ≥ C₂ · f(n) (again for n ≥ n₀).

Do not confuse! [explained below]

Not to be confused with worst-, best- and average-case analysis: none of the three notations (O, Ω, Θ) is tied to the best-, worst- or average-case analysis of algorithms. Each notation can be applied to each kind of analysis.

We usually use these notations to analyze the complexity of algorithms (like the merge sort example above). When we say “Algorithm A is O(f(n))", what we really mean is "the algorithm's complexity under worst-case^ analysis is O(f(n))", meaning it scales similarly to (or formally, no worse than) the function f(n).

Why do we care about the asymptotic bound of an algorithm?

Well, there are many reasons for it, but I believe the most important of them are:

  1. It is much harder to determine the exact complexity function, so we “compromise” on the big-O/big-Theta notations, which are informative enough theoretically.
  2. The exact number of operations is also platform-dependent. For example, suppose we have a vector (list) of 16 numbers. How many operations will processing it take? The answer is: it depends. Some CPUs allow vector additions, while others don’t, so the answer varies between different implementations and different machines, which is an undesired property. The big-O notation, however, is much more stable across machines and implementations.

To demonstrate this issue, consider the growth of the functions f(n) = log n, f(n) = n, f(n) = 2n and f(n) = n² (shown as graphs in the original answer):

It is clear that f(n) = 2n is "worse" than f(n) = n. But the difference is not nearly as drastic as the difference from the other functions: f(n) = log n quickly gets much lower than the others, and f(n) = n² quickly gets much higher.
So, for the reasons above, we "ignore" the constant factors (the 2 in this example) and keep only the big-O notation.

In the above example, f(n) = n and f(n) = 2n are both in O(n) and in Ω(n), and thus are also in Θ(n).

  • On the other hand, f(n) = log n is in O(n) (it is "better" than f(n) = n), but is NOT in Ω(n), and thus is also NOT in Θ(n).
  • Symmetrically, f(n) = n² is in Ω(n), but NOT in O(n), and thus is also NOT in Θ(n).
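
The same point can be made numerically with a small sketch that prints a few values of log₂ n, n, 2n and n²:

```python
# Printing a few values of log2 n, n, 2n and n^2 to make the same point the graphs
# make visually: the constant factor (n vs 2n) matters far less than the order of
# growth (log n vs n vs n^2).
import math

print(f"{'n':>8} {'log2 n':>10} {'n':>10} {'2n':>10} {'n^2':>14}")
for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>8} {math.log2(n):>10.1f} {n:>10} {2 * n:>10} {n * n:>14}")
```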

^Usually, though not always: when the analysis class (worst, average or best) is not stated, we really mean the worst case.

Asymptotic notation is independent of type of running time

  • Ω, Θ, O notations for T(n) are independent of whether we are looking at the worst-case T(n), average-case T(n) or best-case T(n), i.e. all three notations can be used for the best, worst and average cases.
  • Example:
    → Worst-case T(n) for insertion sort can be expressed as Θ(n²), O(n²), Ω(n²), Ω(n), O(n³), O(n⁴), Ω(1), O(n² log n), etc. (see the small counting experiment after this list).
    → Average-case T(n) can be expressed as Θ(n²), etc.
    → Best-case T(n) can be expressed as Θ(n), O(n), Ω(n), O(n²), Ω(1), Ω(log n), O(n¹⁰), etc.
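
A minimal counting experiment (with a hypothetical helper insertion_sort_comparisons, written only for illustration) showing the best case of insertion sort growing roughly like n and the worst case roughly like n²:

```python
# Minimal sketch: counting the key comparisons made by insertion sort, to see the
# best case (already-sorted input) grow roughly like n while the worst case
# (reverse-sorted input) grows roughly like n^2.

def insertion_sort_comparisons(a):
    """Sort a copy of `a` by insertion sort and return the number of key comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of a[j] against the key
            if a[j] <= key:
                break
            a[j + 1] = a[j]           # shift the larger element one slot to the right
            j -= 1
        a[j + 1] = key
    return comparisons

for n in (100, 200, 400, 800):
    best = insertion_sort_comparisons(range(n))          # sorted input: ~n comparisons
    worst = insertion_sort_comparisons(range(n, 0, -1))  # reverse input: ~n^2/2 comparisons
    print(f"n={n:4}  best={best:6}  worst={worst:8}")
```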

Clarification about semantics of asymptotic notations

  • Suppose the worst-case T(n) for some algorithm is O(n²).
    This means the algo will run in time ≤ cn² (for large n) for every input of size n.
    → Suppose the worst-case T(n) for some algo is Θ(n²).
    This does NOT imply that the algo will run in time Θ(n²) for every input of size n.
  • Suppose the best-case T(n) for some algo is Ω(n).
    This means the algo will run in time ≥ cn (for large enough n) for every input of size n.
    → But suppose the best-case T(n) for some algo is Θ(n). This does not mean that the algo will run in time Θ(n) for every input of size n.
  • Or simply
    → An upper bound on the worst-case time is an upper bound on all inputs of that size.
    → A lower bound on the best-case time is a lower bound on all inputs of that size.
    → A lower bound on the worst-case time is NOT a lower bound on all inputs of that size.
    → An upper bound on the best-case time is NOT an upper bound on all inputs of that size.

Using Limits in asymptotic analysis
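
One standard technique is to compare f(n) and g(n) via the limit of their ratio: when lim (n→∞) f(n)/g(n) exists, a limit of 0 means f(n) = o(g(n)) (hence also O(g(n))), a finite positive limit means f(n) = Θ(g(n)), and a limit of ∞ means f(n) = ω(g(n)) (hence also Ω(g(n))). A minimal sketch using SymPy (assuming SymPy is available; the compare helper is just for illustration) that evaluates such limits symbolically:

```python
# Minimal sketch: comparing growth rates via  L = lim_{n -> oo} f(n) / g(n).
# When the limit exists:  L = 0        => f = o(g)      (hence also O(g))
#                         0 < L < oo   => f = Theta(g)
#                         L = oo       => f = omega(g)  (hence also Omega(g))
import sympy as sp

n = sp.symbols('n', positive=True)

def compare(f, g):
    L = sp.limit(f / g, n, sp.oo)
    if L == 0:
        return "f = o(g), hence O(g)"
    if L == sp.oo:
        return "f = omega(g), hence Omega(g)"
    return f"f = Theta(g) (limit = {L})"

print(compare(100 * n + 5, n**2))        # o(n^2)
print(compare(2 * n**2 + 10 * n, n**2))  # Theta(n^2), limit 2
print(compare(n**3, n**2))               # omega(n^2)
print(compare(n * sp.log(n), n**100))    # o(n^100), as in the quicksort example above
```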

Reflexivity, symmetry and transitivity properties of asymptotic notations

Transitive — Satisfied by Big-Theta, Big-Omega, Big-Oh, little o, little omega

Reflexivity — Satisfied by Big-Theta, Big-Oh, Big-Omega

Symmetry — Satisfied by Big-Theta
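
Written out, these standard properties read:

```latex
% Transitivity (holds for O, \Omega, \Theta, o and \omega):
f(n) = O(g(n)) \ \text{and} \ g(n) = O(h(n)) \implies f(n) = O(h(n))

% Reflexivity (holds for O, \Omega and \Theta, but not for o and \omega):
f(n) = \Theta(f(n)), \qquad f(n) = O(f(n)), \qquad f(n) = \Omega(f(n))

% Symmetry (holds only for \Theta):
f(n) = \Theta(g(n)) \iff g(n) = \Theta(f(n))
```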
