# Big-O Notation

In data structures and algorithms, Big-O notation is a commonly used mathematical notation that describes an upper bound on the growth rate of a function or algorithm. It is used to analyze the time complexity of an algorithm and represents the algorithm's worst-case behavior.

In simple terms, it tells us how fast an algorithm's running time grows as the input size increases.

Big-O notation is written as the uppercase letter "O" followed by a function of the input size. For example, O(n) describes a running time that grows linearly with the input size, while O(n^2) describes one that grows quadratically.
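To make the difference concrete, here is a minimal sketch (the function names `linear_ops` and `quadratic_ops` are illustrative, not from any library) that counts the work done by a linear and a quadratic algorithm as the input grows:

```python
def linear_ops(n):
    # An O(n) algorithm does work proportional to the input size.
    return n

def quadratic_ops(n):
    # An O(n^2) algorithm does work proportional to the square of the input size.
    return n * n

# Doubling n doubles the linear work but quadruples the quadratic work.
for n in (10, 100, 1000):
    print(n, linear_ops(n), quadratic_ops(n))
```

Notice how quickly the quadratic column outpaces the linear one; this gap is exactly what Big-O notation captures.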

## Working Principle

Here’s a brief explanation of how Big-O notation works:

• Suppose we have an algorithm with a time complexity function f(n), where n is the input size.
• We want to find out how f(n) grows as n approaches infinity, to understand the efficiency of the algorithm.
• We can represent this growth rate using Big-O notation by identifying the dominant term of the function, or the term that grows the fastest.
• We ignore any constants or lower-order terms and only focus on the dominant term.
• We then write the function in Big-O notation using only that dominant term, with its constant coefficient dropped.
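The steps above can be sketched with a hypothetical cost function f(n) = 3n^2 + 5n + 2. As n grows, the ratio f(n) / n^2 approaches the constant 3, showing that the n^2 term dominates and the function is O(n^2):

```python
def f(n):
    # Hypothetical cost function: 3n^2 + 5n + 2.
    return 3 * n * n + 5 * n + 2

# As n grows, f(n) / n^2 approaches 3: the lower-order terms (5n + 2)
# and the constant factor (3) become irrelevant to the growth rate,
# so we classify f as O(n^2).
for n in (10, 1000, 100000):
    print(n, f(n) / (n * n))
```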

Let’s consider an example to understand this better:

Suppose we have an algorithm that searches for a specific element in an unsorted array. The algorithm uses a linear search approach, where it iterates through each element in the array until it finds the target element.

Here's the algorithm in Python:

```python
def search(arr, x):
    # Linear search: scan each element in order until x is found.
    for i in range(len(arr)):
        if arr[i] == x:
            return i  # index of the first match
    return -1  # x is not in the array
```

The time complexity of this algorithm can be expressed using Big-O notation as O(n), where n is the size of the array. This is because the algorithm has to iterate over all elements in the array to find the target element, which takes linear time in the worst-case scenario.

In this case, the dominant term of the time complexity function is n, the size of the array. We ignore constant factors and lower-order terms, such as the time taken to initialize variables or return a value. Therefore, the Big-O notation of this algorithm is O(n).
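The worst-case behavior can be seen directly by instrumenting the search. The sketch below (the helper `search_with_count` is a hypothetical name introduced here) returns the number of comparisons alongside the result:

```python
def search_with_count(arr, x):
    # Linear search that also reports how many comparisons it performed.
    comparisons = 0
    for i in range(len(arr)):
        comparisons += 1
        if arr[i] == x:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
print(search_with_count(arr, 7))   # best case: target is first, 1 comparison
print(search_with_count(arr, 5))   # target is last: n comparisons
print(search_with_count(arr, 42))  # target absent (worst case): n comparisons
```

For an array of n elements, the worst case always costs n comparisons, which is why the algorithm is O(n).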

In summary, Big-O notation is a useful tool for analyzing the efficiency of algorithms and comparing them to one another. It helps us understand how quickly an algorithm can solve a problem as the input size increases and provides a way to measure the scalability of an algorithm.