Given an array (that is, a Python list) of integers, for each element find the nearest smaller element on its left side.
The naive solution has time complexity O(n^2). Can you solve it in O(n)? Yes, but you need a Stack to do that.
The naive solution is, for each element, to check all the elements to the left of it until you find the first one that is smaller.
The worst case run time for that is O(n^2): for an array of length n it takes 0 + 1 + 2 + 3 + … + (n-1) = (n-1)*n/2 = O(n^2) comparisons.
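As a minimal sketch of that idea, the O(n) stack-based approach could look as follows (the function name nearest_smaller_to_left is just for illustration, and a plain Python list serves as the stack here):

def nearest_smaller_to_left(arr):
    stack = []   # candidates for the nearest smaller element, most recent on top
    result = []
    for x in arr:
        # Remove all candidates that are not smaller than the current element.
        while stack and stack[-1] >= x:
            stack.pop()
        # The top of the stack (if any) is the nearest smaller element to the left.
        result.append(stack[-1] if stack else None)
        stack.append(x)
    return result

For example, nearest_smaller_to_left([3, 1, 4, 1, 5]) gives [None, None, 1, None, 1]. Each element is pushed and popped at most once, which is what makes the total work O(n).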
How can a Stack help you solve the Balancing Bracket Problem efficiently?
What is the time complexity, and does our implementation have that performance?
What is the Balancing Bracket Problem and how do you solve it?
See the video below to get a good introduction to the problem.
How to solve the problem in Python
You need a stack. You could use a Python list as a stack, but while appending and popping the last element of a Python list are amortised O(1) operations, that does not guarantee the worst case performance you need for every single operation.
Implementing your own stack gives worst case O(1) time complexity per operation. So let us begin by implementing a Stack in Python.
It is simpler than you think.
class Node:
    def __init__(self, element=None, next_node=None):
        self.element = element
        self.next_node = next_node

class Stack:
    def __init__(self):
        self.stack = None
        self.size = 0
    def push(self, element):
        # The new node points at the old top of the stack.
        self.stack = Node(element, self.stack)
        self.size += 1
    def pop(self):
        # Return the element on top of the stack and remove its node.
        element = self.stack.element
        self.stack = self.stack.next_node
        self.size -= 1
        return element
    def get_size(self):
        return self.size
    def is_empty(self):
        return self.stack is None
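As a quick illustration of how the stack behaves:

s = Stack()
s.push("(")
s.push("[")
print(s.pop())       # prints [
print(s.is_empty())  # prints False
print(s.pop())       # prints (
print(s.is_empty())  # prints True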
Then, given that stack, solving the Balancing Bracket Problem becomes easy.
def balancing_bracket(s):
    stack = Stack()
    for c in s:
        if c in "([{":
            # Opening brackets are pushed on the stack.
            stack.push(c)
        elif c in ")]}":
            # A closing bracket must match the most recent opening bracket.
            if stack.is_empty():
                return False
            e = stack.pop()
            if e == "(" and c == ")":
                continue
            elif e == "[" and c == "]":
                continue
            elif e == "{" and c == "}":
                continue
            else:
                return False
        else:
            # All other characters are ignored.
            continue
    # The string is only balanced if no opening brackets are left unmatched.
    if not stack.is_empty():
        return False
    else:
        return True
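To see it in action, a few calls could look like this:

print(balancing_bracket("([{}])"))  # prints True
print(balancing_bracket("([)]"))    # prints False
print(balancing_bracket("(()"))     # prints False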
Time complexity analysis of our solution
Well, the idea with the solution is that it should be O(n), that is, linear in complexity. That means that a problem of double size should take double the time to solve.
The naive solution takes O(n^2), which means a problem of double size takes 4 times as long.
But let us try to investigate the time performance of our solution. A good tool for that is the cProfile library provided by Python.
But first we need to be able to create random data. Also notice that the random data we create should be balanced bracket strings, to get worst case performance from our implementation.
To generate a random balanced bracket string you can use the following code.
import random

def create_balanced_string(n):
    map_brackets = {"(": ")", "[": "]", "{": "}"}
    s = Stack()
    result = ""
    # Stop adding characters when the remaining budget is only enough
    # to close the brackets that are still open on the stack.
    while n > 0 and n > s.get_size():
        if s.is_empty() or random.randint(0, 1) == 0:
            # Add a random opening bracket and remember it on the stack.
            bracket = "([{"[random.randint(0, 2)]
            result += bracket
            s.push(bracket)
        else:
            # Close the most recently opened bracket.
            result += map_brackets[s.pop()]
        n -= 1
    # Close any brackets that are still open.
    while not s.is_empty():
        result += map_brackets[s.pop()]
    return result
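A quick way to check that the generator produces balanced strings is to feed its output back into our solution:

s = create_balanced_string(20)
print(s)                     # a random balanced bracket string
print(balancing_bracket(s))  # prints True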
Back to cProfile, which can be called as follows.
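A minimal sketch of such a profiling run could look like this (the input size of 1,000,000 characters is just an example):

import cProfile

cProfile.run("balancing_bracket(create_balanced_string(1000000))")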
In the output from cProfile we find our total run-time. It is interesting to notice that most of the time is spent creating the string, and diving deeper into that, you can see that it is the calls creating random integers that are expensive.
Well, to figure out whether our code is linear in performance, we need to create data points for various input sizes.
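A simple sketch of such an experiment times the solution for doubling input sizes (using time.time() here instead of cProfile to get one number per run; the sizes are just examples):

import time

for n in [250000, 500000, 1000000, 2000000]:
    s = create_balanced_string(n)
    start = time.time()
    balancing_bracket(s)
    print(n, time.time() - start)

If the implementation is linear, the measured time should roughly double from one line of output to the next.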