**Asymptotic notations** are mathematical notations used to describe the running time of an algorithm as the input tends towards a particular value or a limiting value (typically infinity).

• **For example**: In bubble sort, when the input array is already sorted, the time taken by the algorithm is linear, i.e., the best case.

• They are used to represent the complexity of an algorithm for asymptotic analysis.
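The bubble sort example above can be sketched in Python. This is a minimal sketch assuming the common early-exit variant, which stops as soon as a full pass performs no swaps:

```python
# Bubble sort with an early-exit flag (the classic optimized variant),
# counting comparisons to make the best-case behavior visible.
def bubble_sort(arr):
    arr = list(arr)
    comparisons = 0
    for i in range(len(arr) - 1):
        swapped = False
        for j in range(len(arr) - 1 - i):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps in this pass: array is already sorted
            break
    return arr, comparisons

# Already-sorted input: one pass, n-1 comparisons (linear best case).
print(bubble_sort([1, 2, 3, 4, 5]))   # ([1, 2, 3, 4, 5], 4)
# Reverse-sorted input: every pass swaps (quadratic worst case).
print(bubble_sort([5, 4, 3, 2, 1]))   # ([1, 2, 3, 4, 5], 10)
```

On sorted input the flag stops the algorithm after a single pass, which is why the best case is linear rather than quadratic.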

## Types of Asymptotic notations

**There are three types of asymptotic notations that are commonly used:**

**1. Big-O Notation (O-notation):**

**Big-O notation** represents the upper bound of the running time of an algorithm. Therefore, it gives the **worst-case complexity** of an algorithm or the longest amount of time an algorithm can possibly take to complete.

• If f(n) describes the running time of an algorithm, then f(n) is O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.

**In other words, c*g(n) serves as an upper bound on the algorithm’s execution time.**

**Mathematical Representation of Big-O Notation:**

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
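As a sanity check of this definition, the inequality can be verified numerically for a concrete pair of functions. The witnesses c = 3 and n0 = 3 below are illustrative assumptions, not unique choices:

```python
# Numeric check of the Big-O definition for f(n) = 2n^2 + 5 and g(n) = n^2.
# 2n^2 + 5 <= 3n^2 holds exactly when n^2 >= 5, i.e. for all n >= 3,
# so c = 3 and n0 = 3 are one valid pair of witnesses.
f = lambda n: 2 * n**2 + 5
g = lambda n: n**2
c, n0 = 3, 3

assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))
print("f(n) = 2n^2 + 5 is O(n^2) with witnesses c = 3, n0 = 3")
```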

**Properties of Big-O Notation:**

• **Asymptotic Upper Bound**: Big O notation describes the upper bound or worst-case scenario of the growth rate of a function. It represents an upper limit on the rate of growth of the function as the input size increases.

• **Simplicity**: Big O notation simplifies the analysis of algorithms by focusing on the dominant term or the term with the highest order of magnitude in the function. It ignores constant factors and lower-order terms, providing a high-level overview of algorithmic efficiency.

• **Relative Comparison**: Big O notation enables relative comparison between different algorithms by abstracting away specific implementation details and focusing on their overall efficiency classes. It allows developers to assess the scalability and performance characteristics of algorithms without getting bogged down in implementation-specific complexities.

• **Scaling Behavior**: Big O notation describes how the runtime or space usage of an algorithm scales with the size of the input. Algorithms with lower-order complexities (e.g., O(log n), O(n)) are more efficient and scalable than those with higher-order complexities (e.g., O(n²), O(2ⁿ)) for large inputs.

• **Multiplicative Constants**: Big O notation disregards multiplicative constants when analyzing algorithmic efficiency. This means that O(2n) and O(n) are considered equivalent in terms of asymptotic behavior, as both grow linearly with the size of the input.

• **Drop Lower-Order Terms**: Big O notation drops lower-order terms when analyzing algorithmic efficiency. For example, O(n² + n) simplifies to O(n²), as the quadratic term dominates the growth rate for large values of n.

• **Upper Bound**: Big O notation provides an upper bound on the growth rate of a function, indicating the maximum rate at which the function’s value can increase with the size of the input. It represents the worst-case scenario but does not necessarily reflect the average or best-case behavior of an algorithm.
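The "drop lower-order terms" property can be seen numerically: the ratio (n² + n) / n² tends to 1 as n grows, so the lower-order n term contributes nothing asymptotically. The sample values of n below are arbitrary:

```python
# As n grows, (n^2 + n) / n^2 approaches 1: the n term becomes negligible,
# which is why O(n^2 + n) simplifies to O(n^2).
for n in (10, 1_000, 100_000):
    print(n, (n**2 + n) / n**2)
# 10      1.1
# 1000    1.001
# 100000  1.00001
```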

## 2. Theta Notation (Θ-Notation):

**Theta notation** encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it describes a tight bound and is commonly used for analyzing the **average-case** complexity of an algorithm.

**Mathematical Representation of Theta notation:**

Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
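The two-sided bound can likewise be checked numerically. Below, f(n) = 3n² + 2n is squeezed between 3n² and 4n²; the constants c1 = 3, c2 = 4, n0 = 2 are assumed witnesses chosen for illustration:

```python
# Theta check: c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
# 3n^2 <= 3n^2 + 2n always holds, and 3n^2 + 2n <= 4n^2 once n >= 2.
f = lambda n: 3 * n**2 + 2 * n
g = lambda n: n**2
c1, c2, n0 = 3, 4, 2

assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("f(n) = 3n^2 + 2n is Θ(n^2) with witnesses c1 = 3, c2 = 4, n0 = 2")
```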

## 3. Omega Notation (Ω-Notation):

**Omega notation** represents the lower bound of the running time of an algorithm. Thus, it provides the **best-case** complexity of an algorithm, or the shortest amount of time an algorithm can possibly take to complete.

Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there exist a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥ n0.

**Mathematical Representation of Omega notation :**

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
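A numeric sketch of the lower-bound definition, with c = 1 and n0 = 1 as assumed witnesses for this particular pair of functions:

```python
# Omega check: f(n) = n^2 + n grows at least as fast as g(n) = n,
# so c*g(n) <= f(n) holds for c = 1 and all n >= 1.
f = lambda n: n**2 + n
g = lambda n: n

c, n0 = 1, 1
assert all(0 <= c * g(n) <= f(n) for n in range(n0, 10_000))
print("f(n) = n^2 + n is Ω(n) with witnesses c = 1, n0 = 1")
```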

**Properties of Asymptotic Notations:**

**1. General Properties:**

If **f(n)** is **O(g(n))** then **a*f(n)** is also **O(g(n))**, where **a** is a constant.

**Example:**

f(n) = 2n²+5 is O(n²)

then, 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²).

Similarly, this property satisfies both Θ and Ω notation.
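The worked example above (7*f(n) = 14n² + 35) can be verified with concrete witnesses. The pair c = 15 and n0 = 6 below is one assumed valid choice, since 14n² + 35 ≤ 15n² as soon as n² ≥ 35:

```python
# Scaling by a constant does not change the O-class:
# f(n) = 2n^2 + 5 is O(n^2), and so is 7*f(n) = 14n^2 + 35.
f = lambda n: 2 * n**2 + 5

c, n0 = 15, 6   # 14n^2 + 35 <= 15n^2 holds once n^2 >= 35, i.e. n >= 6
assert all(7 * f(n) <= c * n**2 for n in range(n0, 10_000))
print("7*f(n) = 14n^2 + 35 is still O(n^2)")
```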

**2. Reflexive Properties**:

Every function is an upper bound on itself: if f(n) is given, then f(n) is O(f(n)), since the *maximum value of f(n) is f(n) itself*. This is what makes the relation reflexive.

**Example:**

f(n) = n² is O(n²), i.e., O(f(n)).

Similarly, this property satisfies both Θ and Ω notation.
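Reflexivity is trivially checkable: any function is bounded above by one times itself, so c = 1 and n0 = 1 are the natural witnesses:

```python
# Reflexivity: f(n) <= 1 * f(n) for every n, hence f(n) is O(f(n)).
f = lambda n: n**2

c, n0 = 1, 1
assert all(0 <= f(n) <= c * f(n) for n in range(n0, 10_000))
print("f(n) = n^2 is O(f(n)) trivially")
```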

**3. Transitive Properties:**

If **f(n)** is **O(g(n))** and **g(n)** is **O(h(n))**, then **f(n)** is **O(h(n))**.

**Example:**

If f(n) = n, g(n) = n² and h(n) = n³,

then n is O(n²) and n² is O(n³); therefore, n is O(n³).

Similarly, this property satisfies both Θ and Ω notation.
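The transitivity chain in the example can be verified directly: n ≤ n² ≤ n³ for all n ≥ 1, taking c = 1 and n0 = 1 at each step:

```python
# Transitivity: n is O(n^2) and n^2 is O(n^3), hence n is O(n^3).
# With c = 1 and n0 = 1 the chain n <= n^2 <= n^3 holds directly.
assert all(n <= n**2 <= n**3 for n in range(1, 10_000))
print("n is O(n^2), n^2 is O(n^3), therefore n is O(n^3)")
```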