Big O Notation is a critical concept in computer science for analyzing the efficiency of algorithms. Understanding it allows developers to gauge how algorithms will perform as input sizes grow. In this post, we’ll explore different types of Big O notations, compare them, and walk through an example of improving inefficient code using Golang.
What is Big O Notation?
Big O Notation is a mathematical notation that describes the upper bound of an algorithm’s runtime as a function of the input size. It helps us understand the worst-case time complexity, or how an algorithm scales when processing increasingly large datasets.
Common Types of Big O Notation
O(1) Constant Time
An algorithm with O(1) time complexity performs its task in constant time, no matter the input size. An example of O(1) is accessing an array element by index:
arr := []int{1, 2, 3, 4}
element := arr[2] // O(1) - constant time
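Hash-map lookups are another everyday O(1) operation in Go (average case). A minimal sketch — the ages map here is an invented example, not from the array above:

```go
package main

import "fmt"

func main() {
	ages := map[string]int{"alice": 30, "bob": 25}
	age := ages["alice"] // O(1) - average-case constant-time lookup
	fmt.Println(age)
}
```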
O(n) Linear Time
In O(n) complexity, the algorithm’s runtime grows linearly with the input size. A simple for-loop that iterates over all elements in an array is O(n):
for _, v := range arr {
	fmt.Println(v) // O(n) - linear time
}
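Linear search is another textbook O(n) case: in the worst case it must inspect every element before giving up. A small sketch (linearSearch is a name chosen here for illustration):

```go
package main

import "fmt"

// linearSearch scans the slice front to back and returns the index
// of target, or -1 if absent. Worst case: O(n) comparisons.
func linearSearch(arr []int, target int) int {
	for i, v := range arr {
		if v == target {
			return i
		}
	}
	return -1
}

func main() {
	fmt.Println(linearSearch([]int{4, 2, 7, 1}, 7)) // 2
}
```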
O(log n) Logarithmic Time
Algorithms with O(log n) complexity reduce the problem size at each step. Binary search is the classic example: it halves the search range of a sorted slice on every iteration:
func binarySearch(arr []int, target int) int {
	low := 0
	high := len(arr) - 1
	for low <= high {
		mid := low + (high-low)/2
		if arr[mid] == target {
			return mid // target found
		}
		if arr[mid] < target {
			low = mid + 1 // search the right half
		} else {
			high = mid - 1 // search the left half
		}
	}
	return -1 // target not found
}
O(n²) Quadratic Time
O(n²) algorithms have a runtime proportional to the square of the input size. This often happens with nested loops:
for i := 0; i < len(arr); i++ {
	for j := 0; j < len(arr); j++ {
		fmt.Println(arr[i], arr[j]) // O(n²) - quadratic time
	}
}
Comparing Notations
- O(1) is the most efficient, as the runtime stays constant regardless of input size.
- O(log n) is better than O(n), since the problem size shrinks at every step rather than every element being touched.
- O(n) is manageable for small to medium inputs but becomes expensive for very large datasets.
- O(n²) should be avoided whenever possible, as it scales poorly.
Example: Optimizing Code in Golang
Let’s take an inefficient code example with O(n²) complexity and optimize it to a more efficient O(n) solution.
Inefficient Code (O(n²))
func findDuplicates(arr []int) []int {
	duplicates := []int{}
	for i := 0; i < len(arr); i++ {
		for j := i + 1; j < len(arr); j++ {
			if arr[i] == arr[j] {
				duplicates = append(duplicates, arr[i])
			}
		}
	}
	return duplicates // O(n²) - due to nested loops
}
The nested loop causes quadratic complexity. We can optimize it by using a map for O(n) time complexity.
Optimized Code (O(n))
func findDuplicatesOptimized(arr []int) []int {
	seen := make(map[int]bool)
	duplicates := []int{}
	for _, v := range arr {
		if _, exists := seen[v]; exists {
			duplicates = append(duplicates, v)
		} else {
			seen[v] = true
		}
	}
	return duplicates // O(n) - linear time with hash map
}
By using a hash map (seen), we avoid the nested loops and improve the complexity to O(n), making the algorithm much more efficient for larger input sizes. The trade-off is O(n) extra memory for the map.
Conclusion
Understanding how Big O notation works—at least at a basic level—can help you spot and improve inefficient code, making it more scalable and enhancing performance in real-world scenarios.