Binary Search Explained and Implemented in Python

Understand binary search

The whole idea behind binary search is that you can take advantage of the fact that the list you search in is ordered.

Binary search explained

Say we need to search for 7, and we look at the element in the middle of the list (in this example, 10).

Binary search explained

Then we can conclude, in this example, that 7 is not in the right half of the list, as all numbers there must be greater than 10. That is because the list is ordered.

Next we look at the element in the middle of the left side of the list, which is still unknown to us, and ask whether 7 is there.

Binary search explained

As -9 is less than 7, we know that 7 cannot come before -9 in the list, and we are left with the remaining elements between -9 and 10.

Binary search explained

As -3 is less than 7, we know that if 7 is part of the list, then it must be to the right of -3. Also, we know it must be before 10 (our first comparison).

Binary search explained

Hence, it can only be in the last spot left. But as that element is 8, we now know that 7 is not part of the list.
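
To make the walkthrough concrete, here is a small sketch (not part of the source code further down, though it uses the same idea and the same example list) that prints every element the search compares against. Searching for 7 it looks at 10, -9, -3 and 8, and then gives up.

def traced_binary_search(my_list, element):
    # Illustrative sketch only: same algorithm, but it prints each middle element it inspects.
    low, high = 0, len(my_list) - 1
    while low <= high:
        mid = (low + high)//2
        print("comparing against", my_list[mid])
        if my_list[mid] == element:
            return True
        elif my_list[mid] > element:
            high = mid - 1
        else:
            low = mid + 1
    return False


traced_binary_search([-29, -16, -15, -9, -6, -3, 8, 10, 17, 19, 27, 47, 54, 56, 60], 7)
# Prints 10, -9, -3 and 8, then returns False.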

Why is that impressive?

Consider if the list was unsorted. Then you would have to look through the entire list to make sure 7 was not part of it.

Binary search explained

In terms of complexity, that means that if the list contains N elements, it must make N comparisons to search for an element. That is O(N) time complexity.

Binary search, on the other hand, is way more efficient. With each comparison the algorithm can skip one half of the remaining list. That is O(log(N)) time complexity.
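
To get a feeling for the difference, here is a rough back-of-the-envelope sketch (the numbers are only illustrative; the +1 is a crude worst-case bound on how many comparisons binary search needs).

import math

for n in [15, 1_000, 1_000_000]:
    print(n, "elements: linear ~", n, "comparisons, binary ~", math.ceil(math.log2(n)) + 1)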

The source code

def recursive_binary_search(my_list, element):
    # Kick off the recursive search on the full range of the list.
    return recursive_binary_search_internal(my_list, element, 0, len(my_list) - 1)


def recursive_binary_search_internal(my_list, element, low, high):
    if low > high:
        # The search range is empty, so the element is not in the list.
        return False
    else:
        mid = (low + high)//2
        if my_list[mid] == element:
            return True
        else:
            if my_list[mid] > element:
                # The element can only be in the left half of the range.
                return recursive_binary_search_internal(my_list, element, low, mid - 1)
            else:
                # The element can only be in the right half of the range.
                return recursive_binary_search_internal(my_list, element, mid + 1, high)


def binary_search(my_list, element):
    low = 0
    high = len(my_list) - 1
    # Keep narrowing the search range until it is empty.
    while low <= high:
        mid = (low + high)//2
        if my_list[mid] == element:
            return True
        else:
            if my_list[mid] > element:
                high = mid - 1
            else:
                low = mid + 1
    return False


def main():
    my_list = [-29, -16, -15, -9, -6, -3, 8, 10, 17, 19, 27, 47, 54, 56, 60]
    print(my_list)
    element = 56
    print("Binary Search:", binary_search(my_list, element), element)
    print("Recursive Binary Search:", recursive_binary_search(my_list, element), element)


if __name__ == "__main__":
    main()

There are two implementations of binary search in the above example: an iterative one (binary_search) and a recursive one (recursive_binary_search).

Want to learn how to sort a list?

For insertion sort and merge sort, read the tutorials below.

A Simple Implementation of Merge Sort in Python

Understand Merge Sort in 6 Minutes

Merge sort is one of the algorithms you need to master. Why?

Because it is in the class of efficient algorithms and is easy to understand.

But what does efficient mean?

Let’s get back to that. First, how does Merge Sort work?

Merge Sort explained

It takes the list and breaks it down into two sublists. Then it takes these sublists and breaks them down into two again. This process continues until there is only 1 element in each sublist.

And a list containing only 1 element is a sorted list.

Merge Sort explained

It then takes two sublists and merges them together. Notice that each of these sublists (at the start, each sublist contains only 1 element) is already sorted.

That makes it efficient to merge them into one sorted list. The algorithm looks at the first element of each sorted sublist and takes the smaller of the two first.

Merge Sort explained

This process continues all the way down.

Then the next layer of sublists is processed. Again, the same procedure is used to merge them: take the smaller of the two front elements and add it to the new list. This continues.

Merge Sort explained

This process continues until we end up with one list.

Merge Sort explained

Which by magic (or the logic behind the algorithm) is sorted.
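
The heart of the algorithm is that merge step. Here is a minimal stand-alone sketch of merging two already sorted lists (the implementation further down merges in place instead, but the idea is the same).

def merge(left_list, right_list):
    # Repeatedly take the smaller of the two front elements.
    result = []
    i = j = 0
    while i < len(left_list) and j < len(right_list):
        if right_list[j] < left_list[i]:
            result.append(right_list[j])
            j += 1
        else:
            result.append(left_list[i])
            i += 1
    # One list is exhausted; append whatever remains of the other.
    result.extend(left_list[i:])
    result.extend(right_list[j:])
    return result


print(merge([-9, 8, 19, 56], [-29, -6, -3, 27]))  # [-29, -9, -6, -3, 8, 19, 27, 56]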

Time complexity

Well, we said it is one of the efficient sorting algorithms. That means it runs in O(N log(N)) time.

That means that if you have a list of N unsorted elements, it will take in the order of N log(N) operations.

Is that true for Merge Sort?

How many layers do you have in the algorithm?

Well, for each layer you halve the size of each sublist. You can do that log(N) times.

For each layer, you do N comparisons. That results in N log(N) operations, hence, the O(N log(N)) time complexity.
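
As a quick sanity check with concrete numbers (using a list of 8 elements, like the one in the example below), the arithmetic works out like this.

import math

n = 8                        # length of the example list below
layers = int(math.log2(n))   # 3 layers of merging
print(layers * n)            # roughly 24 operations, in the order of N log(N)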

The implementation of Merge Sort in Python

def merge_sort(my_list):
    # A list of 0 or 1 elements is already sorted.
    if len(my_list) <= 1:
        return my_list

    # Split the list into two halves and sort each half recursively.
    mid = len(my_list)//2
    left_list = my_list[:mid]
    right_list = my_list[mid:]

    merge_sort(left_list)
    merge_sort(right_list)

    index_left = 0
    index_right = 0
    index_main = 0

    # Merge the two sorted halves back into my_list by repeatedly
    # taking the smaller front element.
    while index_left < len(left_list) and index_right < len(right_list):
        if right_list[index_right] < left_list[index_left]:
            my_list[index_main] = right_list[index_right]
            index_right += 1
            index_main += 1
        else:
            my_list[index_main] = left_list[index_left]
            index_left += 1
            index_main += 1

    # Copy any elements left over in either half.
    while index_left < len(left_list):
        my_list[index_main] = left_list[index_left]
        index_left += 1
        index_main += 1

    while index_right < len(right_list):
        my_list[index_main] = right_list[index_right]
        index_right += 1
        index_main += 1


def main():
    my_list = [19, 56, 8, -6, -3, 27, -9, -29]
    print(my_list)
    merge_sort(my_list)
    print(my_list)


if __name__ == "__main__":
    main()

That is awesome.

Want to learn more about sorting? Check out Insertion Sort, which is also one of the sorting algorithms you need to master. It is not as efficient, but it has one advantage you need to understand.

Master Insertion Sort and Implement it in Python

Understand the algorithm

Understand Insertion Sort in 6 Minutes

Insertion sort is an awesome algorithm that has some quite interesting use cases due to its nature.

First understand the basics of it.

Consider the following list of integers.

Insertion sort explained

The above list has 10 elements. Now if you only consider the list consisting of the first element (number 3), then that part is actually sorted. If number 3 were the only number in the list, it would be sorted.

Now consider the list of the first two elements.

Insertion sort explained

Then we have the list of two elements, 3 and 5. Here we are lucky, as this is still sorted.

Now for the next element, 1, we are a bit out of luck, because if we just added that element to the sorted part of the list (the green part), it would no longer be sorted.

Insertion sort explained

If 1 were added, we would need to swap it down until it is in place. First we swap 1 with 5.

Insertion sort explained

Then we swap 1 with 3. Now we have an ordered list again, namely the list of the first 3 elements (the green part of the list below).

Insertion sort explained

Hence, the process is to think of the first element as a list and realize it is ordered. Then insert one element at a time. For each element inserted, swap it down until it is in place.

Why is Insertion Sort great to know?

Well, first of all it is simple to understand. Second of all, it is easy to implement (see the code below). But that is not the exciting part.

Well, what is it?

It has an interesting use case. Consider the following:

  • Say you need to keep an ordered list in memory as part of your program.
  • But there is a twist.
  • Someone keeps coming with new numbers all the time and still asks you to keep the list ordered.
  • Then insertion sort keeps this list ordered efficiently.

Why?

Because if the list is ordered and you need to add another element, you just follow the above procedure.

The worst-case run-time for inserting a single element into a sorted list of N elements is O(N).
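
Here is a minimal sketch of that use case (the function name insert_sorted is made up for this illustration): append the new number and swap it down until it is in place, exactly as described above.

def insert_sorted(my_list, element):
    # Assumes my_list is already sorted; keeps it sorted after the insert.
    my_list.append(element)
    j = len(my_list) - 1
    while j > 0 and my_list[j] < my_list[j-1]:
        my_list[j], my_list[j-1] = my_list[j-1], my_list[j]
        j -= 1


numbers = [1, 3, 5, 8]
insert_sorted(numbers, 4)
print(numbers)  # [1, 3, 4, 5, 8]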

What is the catch?

Well, as an algorithm to sort an entire list, its performance is not that good. That is, there are more efficient algorithms out there to sort N elements.

Let’s analyze.

For the first element you use 1 operation. The second element 2 operations. The third element 3 operations. Wait a minute, isn’t that this formula?

1 + 2 + 3 + 4 + … + N = N(N+1)/2

Which unfortunately gives a performance of O(N^2).
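
If you want to convince yourself of the formula, a one-liner is enough (N = 10 here is just an arbitrary example).

N = 10
print(sum(range(1, N + 1)), N*(N + 1)//2)  # both print 55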

So why bother about this algorithm?

Well, because of the use case explained above: you can keep a list ordered and insert a new element into it at low cost.

The code

import random


def generate_random_list(n):
    # n is the length of the list
    my_list = []
    for i in range(n):
        my_list.append(random.randint(0, 4*n))
    return my_list


def insertion_sort(my_list):
    for i in range(1, len(my_list)):
        for j in range(i, 0, -1):
            if my_list[j] < my_list[j-1]:
                my_list[j], my_list[j-1] = my_list[j-1], my_list[j]
            else:
                break


def main():
    my_list = generate_random_list(20)
    print(my_list)
    insertion_sort(my_list)
    print(my_list)


if __name__ == "__main__":
    main()

The above code implements the algorithm (see only 7 lines of code) and tests it.

I hope you enjoyed.

Bubble Sort Explained, Implemented in Python and Time Complexity Analysis

What will we cover?

  • What is Bubble sort and how does it work?
  • How to implement it in Python
  • How to evaluate the time complexity of the Bubble sort algorithm

What is Bubble sort and how does it work?

In this video the Bubble sort algorithm is explained. On a high level, it takes an unsorted list and returns it sorted. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order; after each pass the largest remaining element has bubbled up to its final position.

How to implement Bubble sort in Python

Well, notice the above description. It is straightforward and simple to implement. That is one of the beauties of the algorithm. Simple to understand. Simple to implement.

def bubble_sort(my_list):
    for i in range(len(my_list), 1, -1):
        # After each pass, the largest element among the first i ends up at position i-1.
        for j in range(1, i):
            if my_list[j-1] > my_list[j]:
                my_list[j-1], my_list[j] = my_list[j], my_list[j-1]

A simple illustration of how you can use it.

import random


def generate_random_list(n):
    my_list = []
    for i in range(n):
        my_list.append(random.randint(0, n))
    return my_list


def bubble_sort(my_list):
    for i in range(len(my_list), 1, -1):
        for j in range(1, i):
            if my_list[j-1] > my_list[j]:
                my_list[j-1], my_list[j] = my_list[j], my_list[j-1]



my_list = generate_random_list(20)
print(my_list)
bubble_sort(my_list)
print(my_list)

Evaluating the performance of Bubble sort

First of all, the algorithm has two for-loops: an outer and an inner for-loop. With an input list of length N, the loops will be iterated the following number of times.

  • Outer for-loop: N times
  • Inner for-loop: N-1 + N-2 + N-3 + … + 1 + 0 times

Knowing the formula

  • N-1 + N-2 + … + 1 + 0 = (N-1)*N/2

We can conclude that the time complexity of Bubble sort is O(N^2).

But let’s try to experiment with real running data, to see if we can confirm that complexity.

To get run-times from the algorithm, the cProfile library comes in handy. It is easy to use and gives good insights. A simple way to get run-times is to set it up like this.

import random
import cProfile


def generate_random_list(n):
    return [random.randint(0, 4*n) for i in range(n)]


def bubble_sort(my_list):
    for i in range(len(my_list), 1, -1):
        for j in range(1, i):
            if my_list[j-1] > my_list[j]:
                my_list[j-1], my_list[j] = my_list[j], my_list[j-1]


def profile_bubble_sort(n):
    my_list = generate_random_list(n)
    bubble_sort(my_list)


cProfile.run("profile_bubble_sort(10000)")

It will produce output like the following.

         56372 function calls in 11.056 seconds

   Ordered by: standard name

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000   11.056   11.056 <string>:1(<module>)
        1    0.000    0.000   11.055   11.055 BubbleSortProfiling.py:16(profile_bubble_sort)
        1    0.000    0.000    0.034    0.034 BubbleSortProfiling.py:5(generate_random_list)
        1    0.008    0.008    0.034    0.034 BubbleSortProfiling.py:6(<listcomp>)
        1   11.021   11.021   11.021   11.021 BubbleSortProfiling.py:9(bubble_sort)
    10000    0.010    0.000    0.022    0.000 random.py:200(randrange)
    10000    0.005    0.000    0.027    0.000 random.py:244(randint)
    10000    0.007    0.000    0.012    0.000 random.py:250(_randbelow_with_getrandbits)
        1    0.000    0.000   11.056   11.056 {built-in method builtins.exec}
        1    0.000    0.000    0.000    0.000 {built-in method builtins.len}
    10000    0.001    0.000    0.001    0.000 {method 'bit_length' of 'int' objects}
        1    0.000    0.000    0.000    0.000 {method 'disable' of '_lsprof.Profiler' objects}
    16364    0.003    0.000    0.003    0.000 {method 'getrandbits' of '_random.Random' objects}

You get the time spent in Bubble sort by looking at the line for bubble_sort. The cumtime column shows the total time spent in the function.

By collecting the run-times for various sizes of lists and plotting them, you get a graph that shows O(N^2) growth, as expected.
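
The exact measurements behind that graph are not reproduced here, but a simple way to collect similar run-times yourself is a sketch like the following (it times bubble_sort directly with time.perf_counter instead of cProfile).

import random
import time


def generate_random_list(n):
    return [random.randint(0, 4*n) for i in range(n)]


def bubble_sort(my_list):
    for i in range(len(my_list), 1, -1):
        for j in range(1, i):
            if my_list[j-1] > my_list[j]:
                my_list[j-1], my_list[j] = my_list[j], my_list[j-1]


for n in [1000, 2000, 4000, 8000]:
    my_list = generate_random_list(n)
    start = time.perf_counter()
    bubble_sort(my_list)
    print(n, time.perf_counter() - start)
# Doubling n should roughly quadruple the run-time if the O(N^2) estimate holds.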