Master Selection Sort in Python in 3 Steps

What will we cover in this tutorial?

Selection sort is one of the simplest sorting algorithms, which makes it a good algorithm to start with. While it is considered slow, it has the advantage of not using auxiliary space.

Step 1: Understand the Selection Sort algorithm

The goal of sorting is to take an unsorted array of integers and sort it.

An example is given below.

[97, 29, 53, 92, 42, 36, 12, 57, 90, 76, 85, 81, 12, 61, 45, 3, 83, 34, 7, 48]
to
[3, 7, 12, 12, 29, 34, 36, 42, 45, 48, 53, 57, 61, 76, 81, 83, 85, 90, 92, 97]

The algorithm is the most intuitive way of sorting a list.

It works as follows.

  1. Go through the list to be sorted and find the smallest element.
  2. Swap the smallest element with the element in the first position.

If you started with the following list.

[97, 29, 53, 92, 42, 36, 12, 57, 90, 76, 85, 81, 12, 61, 45, 3, 83, 34, 7, 48]

You would now have this list.

[3, 29, 53, 92, 42, 36, 12, 57, 90, 76, 85, 81, 12, 61, 45, 97, 83, 34, 7, 48]
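This first pass can be sketched as a small helper function (one_pass is an illustrative name used here, not part of the final implementation): find the smallest element from a given position onward and swap it into place.

```python
# A sketch of a single pass of the two steps above: find the index of the
# smallest element from position `start` onward, then swap it into place.
def one_pass(values, start):
    index_of_min_value = start
    for j in range(start + 1, len(values)):
        if values[j] < values[index_of_min_value]:
            index_of_min_value = j
    values[start], values[index_of_min_value] = values[index_of_min_value], values[start]

values = [97, 29, 53, 92, 42]
one_pass(values, 0)
print(values)  # [29, 97, 53, 92, 42]
```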

Notice that now that the smallest element is at the front of the list, the second smallest element must be somewhere in the rest of the list, from the second position all the way to the end.

Hence, you can repeat the above 2 steps on the list excluding the first element.

This will give you the following list.

[3, 7, 53, 92, 42, 36, 12, 57, 90, 76, 85, 81, 12, 61, 45, 97, 83, 34, 29, 48]

Now the first two elements are sorted, while the rest of the list is not.

Hence, we can repeat the two steps again on the unsorted part of the list.

If we continue this until we reach the end of the list, we end up with a sorted list.

Step 2: Implementation of Selection Sort

A beautiful thing about Selection Sort is that it does not use any auxiliary memory. This can be a big advantage when sorting large data sets.

The disadvantage of Selection Sort is the time complexity.

We will come back to that later.

The code for Selection Sort can be written in the following manner.

def selection_sort(list_to_sort):
    for i in range(len(list_to_sort)):
        index_of_min_value = i
        for j in range(i + 1, len(list_to_sort)):
            if list_to_sort[j] < list_to_sort[index_of_min_value]:
                index_of_min_value = j
        list_to_sort[i], list_to_sort[index_of_min_value] = list_to_sort[index_of_min_value], list_to_sort[i]

list_to_sort = [97, 29, 53, 92, 42, 36, 12, 57, 90, 76, 85, 81, 12, 61, 45, 3, 83, 34, 7, 48]
selection_sort(list_to_sort)
print(list_to_sort)

This will produce the correct output.

[3, 7, 12, 12, 29, 34, 36, 42, 45, 48, 53, 57, 61, 76, 81, 83, 85, 90, 92, 97]

Step 3: The time complexity of Selection Sort algorithm

Now this is the sad part of this simple algorithm: it does not perform well. A sorting algorithm is considered efficient if it runs in O(n log(n)) time, which Selection Sort does not.

The simple time complexity analysis is as follows.

Assume we have a list of n unsorted integers. Then the first iteration of the list will make n – 1 comparisons, the second iteration will make n – 2 comparisons, and so forth all the way down to 1 comparison.

This is the sum of 1 to n – 1, which is given by the formula n(n – 1)/2, which is O(n^2).

Other than that, the algorithm performs n swaps of numbers. This is O(n).

Combined, this gives the algorithm O(n + n^2) = O(n^2) time complexity.
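You can verify the comparison count empirically. The sketch below (count_comparisons is an illustrative helper) mirrors the two nested loops of the implementation above and checks the total against the formula n(n – 1)/2.

```python
# Count the comparisons Selection Sort makes on a list of n elements by
# mirroring its two nested loops, and compare against n(n - 1)/2.
def count_comparisons(n):
    comparisons = 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
    return comparisons

n = 20
print(count_comparisons(n), n * (n - 1) // 2)  # 190 190
```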

Next Step

This should whet your appetite to understand how sorting can be made more efficient.

Another good example of a simple sorting algorithm is the Insertion Sort algorithm.

For a more efficient algorithm, you should check out the Merge Sort algorithm.

If you want to be serious about sorting, check out my online course on the subject.

Create a Max-Heap with a Randomized Algorithm in Python

What will we cover in this tutorial?

We will create a heap, or more specifically, a max-heap. A max-heap is a tree structure where the node value of every parent is greater than or equal to the values of its children.

In this tutorial we will implement a max-heap with a binary tree and use a randomized approach to keep it balanced.

You might be wondering why make it randomized. Simply said: to keep the implementation simple and the operations O(log(n)) on average.

Step 1: Recall what a max-heap is

A max-heap is a tree structure where the node value of every parent is greater than or equal to the values of its children.

Example of a max-heap

A heap will have two primary functions.

  1. Insert an element and still keep the max-heap structure.
  2. Get and remove the maximum element (the element at the root) and still keep the max-heap structure.

The goal is to be able to do these operations in O(log(n)) time.

Step 2: Understand what a randomized algorithm can do for you

Randomized algorithms help you achieve great performance on average while keeping the algorithms simple.

To keep the operations of a heap at worst-case O(log(n)), you need to keep the binary tree balanced, which requires complex bookkeeping.

Instead, just insert the leaves into the tree randomly, and you will get the same result with very high probability. Hence, you will end up with an average time of O(log(n)) for the operations.

Step 3: Insert into a max-heap

Now to the fun part. The code.

Let’s start simple and create a Node to represent the nodes in the binary tree, which will represent the max-heap.

class Node:
    def __init__(self, element):
        self.element = element
        self.left = None
        self.right = None

The node needs to be able to keep the element (which should be comparable), and a left and right child.

From the above Node class you can create an arbitrary binary tree.
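As a quick illustration, you could wire up a tiny tree by hand from the Node class (the values here are just examples):

```python
# The Node class from above, plus a small tree built by hand to show
# how the left/right links form the binary tree structure.
class Node:
    def __init__(self, element):
        self.element = element
        self.left = None
        self.right = None

root = Node(36)
root.left = Node(29)
root.right = Node(34)
root.left.left = Node(27)

print(root.element, root.left.element, root.right.element)  # 36 29 34
```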

The max-heap insert function can be implemented by a recursive and randomized approach in the following manner.

import random

class Heap:
    def __init__(self):
        self.head = None
    def _insert(self, element, node):
        # if element is larger than node.element, switch
        if element > node.element:
            element, node.element = node.element, element
        # check if available node
        if node.left is None:
            node.left = Node(element)
            return
        if node.right is None:
            node.right = Node(element)
            return
        # Choose a random node (here is the randomness hidden)
        if random.randint(0, 1) == 0:
            self._insert(element, node.left)
        else:
            self._insert(element, node.right)
    def insert(self, element):
        if self.head is None:
            self.head = Node(element)
        else:
            self._insert(element, self.head)

The function insert(…) checks for the special case where there are no nodes in the binary tree yet and, if so, inserts the element at the head. Otherwise, it forwards the call to the recursive and randomized function _insert(…), which also takes the head (root) of the tree as an argument.

The recursive function could be called at any node, but it starts from the head (root) node. At each node it does the following.

  1. Check if the element of the node is smaller than the element to insert. If so, switch them.
  2. Check if the node has a free child (left or right). If so, insert a new node with the element there and return.
  3. If none of the above, choose a random child (left or right) and recurse.

That is it. This will most likely create a well-balanced binary tree.

See example here.

               +---------------36---------------+               
       +-------29-------+              +-------34-------+       
   +---27---+      +---20---+      +---32---+      +---33---+   
 +- 3-+  +-13-+  +- 6-+  +- 2-+  +-24-+  +-31-+  +-25-+  +-25-+ 
  0   1           4              16               6             

This is a simple ASCII representation of the binary tree representing the max-heap. Because the binary tree keeps a balance like this, insertion is O(log(n)) on average.
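A short usage sketch of insertion (the Node and Heap classes from above are repeated so the snippet runs on its own): insert a handful of numbers and check that the maximum ends up at the root.

```python
import random

class Node:
    def __init__(self, element):
        self.element = element
        self.left = None
        self.right = None

class Heap:
    def __init__(self):
        self.head = None
    def _insert(self, element, node):
        # if element is larger than node.element, switch
        if element > node.element:
            element, node.element = node.element, element
        # check if there is a free child to place the element in
        if node.left is None:
            node.left = Node(element)
            return
        if node.right is None:
            node.right = Node(element)
            return
        # otherwise recurse into a randomly chosen child
        if random.randint(0, 1) == 0:
            self._insert(element, node.left)
        else:
            self._insert(element, node.right)
    def insert(self, element):
        if self.head is None:
            self.head = Node(element)
        else:
            self._insert(element, self.head)

heap = Heap()
for value in [13, 36, 2, 29, 24]:
    heap.insert(value)
# regardless of the random choices, the maximum is always at the root
print(heap.head.element)  # 36
```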

Step 4: Delete the maximum element from the heap (and return it)

Deleting the maximum element removes the root (head) of the binary tree. Then we need to take the larger child and move its element up. That obviously leaves an empty spot in the child below. Hence, we need to repeat the same operation there.

This sounds recursive, doesn’t it?

import random

class Heap:
    def __init__(self):
        self.head = None
    def _insert(self, element, node):
        # if element is larger than node.element, switch
        if element > node.element:
            element, node.element = node.element, element
        # check if available node
        if node.left is None:
            node.left = Node(element)
            return
        if node.right is None:
            node.right = Node(element)
            return
        if random.randint(0, 1) == 0:
            self._insert(element, node.left)
        else:
            self._insert(element, node.right)
    def insert(self, element):
        if self.head is None:
            self.head = Node(element)
        else:
            self._insert(element, self.head)
    def get_max(self):
        return self.head.element
    def _delete_max(self, node):
        if node.left is None and node.right is None:
            return None
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        if node.right.element > node.left.element:
            node.element = node.right.element
            node.right = self._delete_max(node.right)
            return node
        else:
            node.element = node.left.element
            node.left = self._delete_max(node.left)
            return node
    def delete_max(self):
        if self.head is None:
            return None
        max_element = self.head.element
        self.head = self._delete_max(self.head)
        return max_element

The delete_max function takes care of the special case where there are no elements (nodes) in the binary tree. Otherwise, it saves the maximum element, calls the recursive _delete_max(…) function with the head (root) as argument, and returns the saved element.

The _delete_max(…) function does the following.

  1. Check for the special case where the node has no children. If so, return None.
  2. Check if one child is missing. If so, return the existing child.
  3. Otherwise, take the child with the larger element, assign that element to the node (remember, we have removed the element from the calling node), call _delete_max(…) recursively on that child, and assign the result back to that child.

That can be a bit confusing at first. But try it out.

This operation also runs in O(log(n)) on average. And as elements are inserted randomly, removing them in order (maximum first) will remove them randomly too and keep the binary tree balanced in the average case.
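To try it out, the sketch below repeats the classes so it runs on its own, then deletes the maximum repeatedly. Since delete_max always returns the current maximum, the elements come out in descending order regardless of the random choices made during insertion.

```python
import random

# The Node and Heap classes from the tutorial, followed by a small
# usage sketch of delete_max.
class Node:
    def __init__(self, element):
        self.element = element
        self.left = None
        self.right = None

class Heap:
    def __init__(self):
        self.head = None
    def _insert(self, element, node):
        # if element is larger than node.element, switch
        if element > node.element:
            element, node.element = node.element, element
        if node.left is None:
            node.left = Node(element)
            return
        if node.right is None:
            node.right = Node(element)
            return
        if random.randint(0, 1) == 0:
            self._insert(element, node.left)
        else:
            self._insert(element, node.right)
    def insert(self, element):
        if self.head is None:
            self.head = Node(element)
        else:
            self._insert(element, self.head)
    def _delete_max(self, node):
        if node.left is None and node.right is None:
            return None
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        # move the larger child's element up and recurse into that child
        if node.right.element > node.left.element:
            node.element = node.right.element
            node.right = self._delete_max(node.right)
        else:
            node.element = node.left.element
            node.left = self._delete_max(node.left)
        return node
    def delete_max(self):
        if self.head is None:
            return None
        max_element = self.head.element
        self.head = self._delete_max(self.head)
        return max_element

heap = Heap()
values = [19, 56, 8, -6, 27]
for value in values:
    heap.insert(value)
# repeatedly deleting the maximum yields the elements in descending order
result = [heap.delete_max() for _ in range(len(values))]
print(result)  # [56, 27, 19, 8, -6]
```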

Step 5: The full code and a simple print function of the tree

The full code is given below.

import random

class Node:
    def __init__(self, element):
        self.element = element
        self.left = None
        self.right = None

class Heap:
    def __init__(self):
        self.head = None
    def _insert(self, element, node):
        # if element is larger than node.element, switch
        if element > node.element:
            element, node.element = node.element, element
        # check if available node
        if node.left is None:
            node.left = Node(element)
            return
        if node.right is None:
            node.right = Node(element)
            return
        if random.randint(0, 1) == 0:
            self._insert(element, node.left)
        else:
            self._insert(element, node.right)
    def insert(self, element):
        if self.head is None:
            self.head = Node(element)
        else:
            self._insert(element, self.head)
    def get_max(self):
        return self.head.element
    def _delete_max(self, node):
        if node.left is None and node.right is None:
            return None
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        if node.right.element > node.left.element:
            node.element = node.right.element
            node.right = self._delete_max(node.right)
            return node
        else:
            node.element = node.left.element
            node.left = self._delete_max(node.left)
            return node
    def delete_max(self):
        if self.head is None:
            return None
        max_element = self.head.element
        self.head = self._delete_max(self.head)
        return max_element
    def _get_depth(self, node):
        if node is None:
            return 0
        left = self._get_depth(node.left)
        right = self._get_depth(node.right)
        if left > right:
            return 1 + left
        else:
            return 1 + right
    def get_depth(self):
        return self._get_depth(self.head)
    def _print_heap(self, current_level, request_level, depth, node):
        characters_per_level = 4*2**depth
        characters_per_node = characters_per_level // (2**(current_level + 1))
        if current_level == request_level:
            if node is not None:
                space_fill = characters_per_node // 4 - 1
                if request_level == depth - 1:
                    print(' '*space_fill + ' ' + ' '*space_fill + f'{node.element:2d}' + ' '*space_fill + ' ' + ' '*space_fill, end='')
                else:
                    print(' '*space_fill + '+' + '-'*space_fill + f'{node.element:2d}' + '-'*space_fill + '+' + ' '*space_fill, end='')
            else:
                print(' '*characters_per_node, end='')
        else:
            if node is not None:
                self._print_heap(current_level + 1, request_level, depth, node.left)
                self._print_heap(current_level + 1, request_level, depth, node.right)
            else:
                self._print_heap(current_level + 1, request_level, depth, None)
                self._print_heap(current_level + 1, request_level, depth, None)
    def print_heap(self):
        depth = self._get_depth(self.head)
        for i in range(depth):
            self._print_heap(0, i, depth, self.head)
            print()

Notice that the print function is also recursive.
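As a smaller illustration of the same recursive pattern, here is the depth computation on its own (written as a standalone get_depth function rather than a method, for brevity), applied to a hand-built tree:

```python
class Node:
    def __init__(self, element):
        self.element = element
        self.left = None
        self.right = None

def get_depth(node):
    # depth of a tree: 0 for an empty tree, else 1 plus the deeper subtree
    if node is None:
        return 0
    return 1 + max(get_depth(node.left), get_depth(node.right))

root = Node(36)
root.left = Node(29)
root.left.left = Node(13)
print(get_depth(root))  # 3
```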

Binary Search Explained and Implemented in Python

Understand binary search

The whole idea behind binary search is that you can take advantage of the list you search in being ordered.

Binary search explained

Say we need to search for 7 in the sorted list [-29, -16, -15, -9, -6, -3, 8, 10, 17, 19, 27, 47, 54, 56, 60], and we look at the element in the middle of the list, which is 10.


Then we can conclude, in this example, that 7 is not part of the right side of the list, as all numbers must be greater than 10. That is because the list is ordered.

Next, we look at the element in the middle of the left side of the list, which is still unknown, to see if 7 is there.


As -9 is less than 7, we know that 7 cannot come before -9 in the list, and we are left with the remaining elements between -9 and 10.


As -3 is less than 7, we know that if 7 is part of the list, then it must be to the right of -3. Also, we know it must be before 10 (our first comparison).


Hence, it can only be in the last spot left. But as that element is 8, we now know that 7 is not part of the list.

Why is that impressive?

Consider if the list was unsorted. Then you would have to look through the entire list to make sure 7 was not part of it.


In terms of complexity, that means if the list contains N elements, it must make N comparisons to search for an element. That is O(N) time complexity.

Binary search, on the other hand, is far more efficient. With each comparison the algorithm can skip half of the remaining list. That is O(log(N)) time complexity.

The source code

def recursive_binary_search(my_list, element):
    return recursive_binary_search_internal(my_list, element, 0, len(my_list) - 1)

def recursive_binary_search_internal(my_list, element, low, high):
    if low > high:
        return False
    else:
        mid = (low + high)//2
        if my_list[mid] == element:
            return True
        else:
            if my_list[mid] > element:
                return recursive_binary_search_internal(my_list, element, low, mid - 1)
            else:
                return recursive_binary_search_internal(my_list, element, mid + 1, high)

def binary_search(my_list, element):
    low = 0
    high = len(my_list) - 1
    while low <= high:
        mid = (low + high)//2
        if my_list[mid] == element:
            return True
        else:
            if my_list[mid] > element:
                high = mid - 1
            else:
                low = mid + 1
    return False

def main():
    my_list = [-29, -16, -15, -9, -6, -3, 8, 10, 17, 19, 27, 47, 54, 56, 60]
    print(my_list)
    element = 56
    print("Binary Search:", binary_search(my_list, element), element)
    print("Recursive Binary Search:", recursive_binary_search(my_list, element), element)

if __name__ == "__main__":
    main()

There are two implementations of the binary search in the above example.
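As a side note, Python's standard library ships the binary search building block in the bisect module. A membership check equivalent to the functions above can be sketched like this (contains is an illustrative helper name):

```python
from bisect import bisect_left

def contains(sorted_list, element):
    # bisect_left returns the leftmost insertion point for element;
    # the element is present exactly when that position already holds it
    index = bisect_left(sorted_list, element)
    return index < len(sorted_list) and sorted_list[index] == element

my_list = [-29, -16, -15, -9, -6, -3, 8, 10, 17, 19, 27, 47, 54, 56, 60]
print(contains(my_list, 56), contains(my_list, 7))  # True False
```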

Want to learn how to sort a list?

For the insertion sort read this tutorial. For the merge sort read this tutorial.

A Simple Implementation of Merge Sort in Python

Understand Merge Sort in 6 Minutes


Merge sort is one of the algorithms you need to master. Why?

Because it is in the class of efficient algorithms and is easy to understand.

But what does efficient mean?

Let’s get back to that. First how does Merge Sort work?

Merge Sort explained

It takes the list and breaks it down into two sublists. Then it takes these sublists and breaks them down into two again. This process continues until there is only 1 element in each sublist.

And a list containing only 1 element is a sorted list.


It then takes two sublists and merges them together. Notice that each of these sublists (at first, each sublist contains only 1 element) is sorted.

This makes it efficient to merge them together in sorted order. The algorithm looks at the first element of each sorted sublist and takes the smaller element first.
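The merge of two sorted sublists can be sketched on its own (merge here is an illustrative standalone helper that returns a new list; the tutorial's implementation later does this in place):

```python
# Merge two sorted lists by repeatedly taking the smaller front element.
def merge(left_list, right_list):
    merged = []
    i = j = 0
    while i < len(left_list) and j < len(right_list):
        if right_list[j] < left_list[i]:
            merged.append(right_list[j])
            j += 1
        else:
            merged.append(left_list[i])
            i += 1
    # one of the lists is exhausted; append the rest of the other
    merged.extend(left_list[i:])
    merged.extend(right_list[j:])
    return merged

print(merge([-6, 8, 19, 56], [-29, -9, -3, 27]))
# [-29, -9, -6, -3, 8, 19, 27, 56]
```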


This process continues all the way down.

Then the next row of sublists is taken. Again, the same algorithm is used to merge them together: take the smaller of the two front elements and add it to the new list. This continues.


This process continues until we end up with one list.


Which by magic (or the logic behind the algorithm) is sorted.

Time complexity

Well, we talked about it being one of the efficient sorting algorithms. That means it runs in O(N log(N)) time.

That means, if you have a list of N unsorted elements, it will take on the order of N log(N) operations.

Is that true for Merge Sort?

How many layers do you have in the algorithm?

Well, for each layer you halve the size of each sublist. You can do that log(N) times.

For each layer, you do N comparisons. That results in N log(N) operations, hence, the O(N log(N)) time complexity.

The implementation of Merge Sort in Python

def merge_sort(my_list):
    if len(my_list) <= 1:
        return my_list
    mid = len(my_list)//2
    left_list = my_list[:mid]
    right_list = my_list[mid:]
    merge_sort(left_list)
    merge_sort(right_list)
    index_left = 0
    index_right = 0
    index_main = 0
    while index_left < len(left_list) and index_right < len(right_list):
        if right_list[index_right] < left_list[index_left]:
            my_list[index_main] = right_list[index_right]
            index_right += 1
            index_main += 1
        else:
            my_list[index_main] = left_list[index_left]
            index_left += 1
            index_main += 1
    while index_left < len(left_list):
        my_list[index_main] = left_list[index_left]
        index_left += 1
        index_main += 1
    while index_right < len(right_list):
        my_list[index_main] = right_list[index_right]
        index_right += 1
        index_main += 1

def main():
    my_list = [19, 56, 8, -6, -3, 27, -9, -29]
    print(my_list)
    merge_sort(my_list)
    print(my_list)

if __name__ == "__main__":
    main()

That is awesome.

Want to learn more about sorting? Check out Insertion Sort, which is also one of the sorting algorithms you need to master. It is not as efficient, but it has one advantage you need to understand.