Data Structure & Algorithm in Java

Asymptotic notations

Types of Asymptotic notations

Key properties of Big O notation:
  1. Asymptotic Upper Bound: Big O notation describes the upper bound or worst-case scenario of the growth rate of a function. It represents an upper limit on the rate of growth of the function as the input size increases.
  2. Simplicity: Big O notation simplifies the analysis of algorithms by focusing on the dominant term or the term with the highest order of magnitude in the function. It ignores constant factors and lower-order terms, providing a high-level overview of algorithmic efficiency.
  3. Relative Comparison: Big O notation enables relative comparison between different algorithms by abstracting away specific implementation details and focusing on their overall efficiency classes. It allows developers to assess the scalability and performance characteristics of algorithms without getting bogged down in implementation-specific complexities.
  4. Scaling Behavior: Big O notation describes how the runtime or space usage of an algorithm scales with the size of the input. Algorithms with lower-order complexities (e.g., O(log n), O(n)) are more efficient and scalable than those with higher-order complexities (e.g., O(n^2), O(2^n)) for large inputs.
  5. Multiplicative Constants: Big O notation disregards multiplicative constants when analyzing algorithmic efficiency. This means that O(2n) and O(n) are considered equivalent in terms of asymptotic behavior, as both grow linearly with the size of the input.
  6. Drop Lower-Order Terms: Big O notation drops lower-order terms when analyzing algorithmic efficiency. For example, O(n^2 + n) simplifies to O(n^2), as the quadratic term dominates the growth rate for large values of n.
  7. Upper Bound: Big O notation provides an upper bound on the growth rate of a function, indicating the maximum rate at which the function’s value can increase with the size of the input. It represents the worst-case scenario but does not necessarily reflect the average or best-case behavior of an algorithm.
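The scaling behavior described above can be made concrete by counting basic operations. The sketch below (class and method names are illustrative, not from the original text) contrasts a linear routine with a quadratic one: doubling n doubles the first count but quadruples the second.

```java
// Counts the basic operations performed by two simple routines to
// illustrate how O(n) and O(n^2) growth differ as input size increases.
public class GrowthDemo {
    // Linear scan: touches each element once, so O(n) operations.
    public static long linearOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            ops++; // one unit of work per element
        }
        return ops;
    }

    // Nested loop over all pairs of elements, so O(n^2) operations.
    public static long quadraticOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++; // one unit of work per pair
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        // Doubling n doubles the linear count but quadruples the quadratic one.
        System.out.println(linearOps(1000) + " vs " + quadraticOps(1000));
    }
}
```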

Theta notation bounds the function from both above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used to express a tight bound, such as the average-case complexity of an algorithm.


Mathematical Representation of Theta notation:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0 }
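As a sanity check, the Θ definition above can be verified numerically for a concrete function. The sketch below uses f(n) = 2n² + 5 with g(n) = n² and the witness constants c1 = 2, c2 = 3, n0 = 3, which are my own choices for illustration (any valid witnesses would do).

```java
// Numerically checks the Theta-notation inequality
// 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n in [n0, limit].
public class ThetaCheck {
    // f(n) = 2n^2 + 5, an illustrative running-time function.
    static long f(long n) { return 2 * n * n + 5; }

    // g(n) = n^2 with witnesses c1 = 2, c2 = 3; caller supplies n0.
    public static boolean boundsHold(long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            long g = n * n;
            if (!(2 * g <= f(n) && f(n) <= 3 * g)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(boundsHold(3, 1_000_000)); // holds from n0 = 3 onward
    }
}
```

Note that the upper inequality 2n² + 5 ≤ 3n² reduces to n² ≥ 5, which is why n0 = 3 works but n0 = 1 does not.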

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm, i.e., the minimum amount of time an algorithm can take to complete.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0.


Mathematical Representation of Omega notation:

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
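The Ω definition can be checked the same way. The sketch below reuses the illustrative function f(n) = 2n² + 5 with g(n) = n²; here the witnesses c = 2, n0 = 1 are my own choices, and the inequality 2n² ≤ 2n² + 5 holds trivially for every n.

```java
// Numerically checks the Omega-notation inequality
// 0 <= c*g(n) <= f(n) for all n in [n0, limit].
public class OmegaCheck {
    // f(n) = 2n^2 + 5, an illustrative running-time function.
    static long f(long n) { return 2 * n * n + 5; }

    // g(n) = n^2 with witness c = 2; caller supplies n0.
    public static boolean lowerBoundHolds(long n0, long limit) {
        for (long n = n0; n <= limit; n++) {
            if (!(2 * n * n <= f(n))) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(lowerBoundHolds(1, 1_000_000)); // holds for all n >= 1
    }
}
```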

Multiplication by a Constant: If f(n) is O(g(n)), then a*f(n) is also O(g(n)) for any constant a > 0.


Example: f(n) = 2n² + 5 is O(n²);
then 7*f(n) = 7(2n² + 5) = 14n² + 35 is also O(n²).

Similarly, this property satisfies both Θ and Ω notation.
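The constant-factor property for the example above can be verified numerically. The sketch below checks that 7*f(n) = 14n² + 35 stays under c*n² using the witnesses c = 15 and n0 = 6, which are my own choices (35 ≤ n² once n ≥ 6).

```java
// Checks that scaling f(n) = 2n^2 + 5 by the constant 7 keeps it in O(n^2):
// 14n^2 + 35 <= 15*n^2 for all n >= 6.
public class ConstantFactor {
    static long sevenF(long n) { return 7 * (2 * n * n + 5); }

    public static boolean stillBigOofNSquared(long limit) {
        for (long n = 6; n <= limit; n++) {
            if (!(sevenF(n) <= 15 * n * n)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(stillBigOofNSquared(1_000_000)); // the bound holds
    }
}
```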

Reflexive Property: If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) is f(n) itself. In other words, every function is an upper bound of itself, so the relation is always reflexive.


Example: f(n) = n² is O(n²), i.e., O(f(n)).
Similarly, this property satisfies both Θ and Ω notation.

Transitive Property: If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).


Example: If f(n) = n, g(n) = n², and h(n) = n³,
then n is O(n²) and n² is O(n³); therefore, n is O(n³).

Similarly, this property satisfies both Θ and Ω notation.
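The transitive chain in the example above can be checked directly: with the witness constant c = 1 and n0 = 1 (my own choices for illustration), n ≤ n² and n² ≤ n³ hold at every step, so n ≤ n³ follows.

```java
// Verifies the transitivity chain n <= n^2 <= n^3 for all n in [1, limit],
// illustrating that f(n) = n is O(n^2), n^2 is O(n^3), hence n is O(n^3).
public class Transitivity {
    public static boolean chainHolds(long limit) {
        for (long n = 1; n <= limit; n++) {
            if (!(n <= n * n && n * n <= n * n * n)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(chainHolds(10_000)); // the chain holds
    }
}
```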
