
Dynamic Programming And DAC: Python Implementation and Examples

  • Tanesh Balodi
  • Sep 19, 2021

One of the most frequently asked data structures and algorithms interview questions is the difference between dynamic programming and DAC (Divide and Conquer). To answer it well, one needs to understand the logic as well as the intuition behind these two approaches to programming.

 

Both are general strategies for breaking a problem down, and between them, dynamic programming and DAC solve a large share of classic programming problems, from the Fibonacci series to finding the longest common subsequence and more.

 

Some of the major problems that are solved using either dynamic programming or DAC are:

 

  1. Knapsack problem

  2. Rod cutting problem

  3. Shortest common supersequence and longest common subsequence

  4. Quicksort

  5. Merge sort

 

(Related blog: Implementing bubble sort and merge sort using python)

 

 

Introduction to Dynamic Programming

 

The basic intuition behind dynamic programming is to divide a problem into sub-parts, remember the results of those sub-parts, and reuse those results whenever the same sub-part appears again in the problem. The success of any algorithm depends on how much time and space it takes for large inputs; there can be hundreds of ways to design an algorithm, but an ideal one executes faster than the alternatives for the same problem, and dynamic programming paves the road for faster, more efficient code.

 

Let us take an example to understand how dynamic programming works and how it makes an algorithm faster. Below is a function that computes Fibonacci numbers; let us analyze the Python code in detail.


def fib(n):
    # naive recursive Fibonacci
    if n <= 2:
        return 1
    return fib(n-1) + fib(n-2)

First of all, in this version the Fibonacci series starts from ‘1’, and each subsequent number is the sum of the previous two, i.e. the series goes 1, 1, 2, 3, 5, 8, 13, ..., and so on.

 

The code takes a value ‘n’. Since the first two numbers in this Fibonacci series are 1, the code returns 1 whenever ‘n’ is less than or equal to 2; otherwise, it returns the sum of the previous two Fibonacci numbers. We can see the recursion in the return statement.

 

For someone who has learned recursion but has no knowledge of time complexity, this code seems fine. But the time complexity of the above code is O(2^n), which means that if n is 40, the code performs on the order of 2^40 steps. So this code can take hours or even days for larger values of ‘n’.
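
To make that growth concrete, here is a small sketch (the fib_calls helper and its counter argument are additions of mine, not part of the original code) that counts how many times the naive recursion calls itself:

def fib_calls(n, counter):
    # same naive recursion as above, but it also counts every call it makes
    counter[0] += 1
    if n <= 2:
        return 1
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

for n in (10, 20, 30):
    counter = [0]
    fib_calls(n, counter)
    print(n, counter[0])   # the call count grows exponentially with n

Even going from n = 20 to n = 30 multiplies the number of calls by more than a hundred, which is why the naive version becomes unusable so quickly.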

 

(Must catch: Random numbers in python)

 

Now we need to apply dynamic programming to make the code efficient. We know that in dynamic programming we divide the problem, remember the result of each sub-part, and reuse that result whenever the same sub-part comes up again. Let's see what that means by analyzing fib(6).


Example of the Fibonacci series: the recursion tree for fib(6)


fib(6) breaks down in the manner shown in the recursion tree above. If we analyze this recursive order of working, we can see that many of the ‘fib’ calls repeat; for instance, fib(4) occurs two times, fib(3) occurs three times, and so on.

 

We can observe this much repetition even for such a small input. Dynamic programming says to store the result of each and every ‘fib’ call, so once we have solved fib(4), wherever fib(4) occurs again we do not solve it from scratch; we simply look up the value we already stored. This reduces the time complexity to a large extent.

 

(Related blog: Working With Python JSON Objects)

 

 

Fibonacci Series in Python Using Dynamic Programming

 

This time we will store the computed values in a Python dictionary and check whether the value we need is already present in it. This way we save all the time that was previously wasted on recalculating values that had already been calculated.

 

nums = {}   # cache of already-computed Fibonacci values

def fibbo(n):
    if n <= 2:
        return 1
    if n in nums:
        return nums[n]    # reuse the stored result
    else:
        num = fibbo(n-1) + fibbo(n-2)
        nums[n] = num     # remember the result for next time
        return num
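
As a quick check (the exact values below are standard Fibonacci numbers), the memoized version now handles large inputs almost instantly. Python's built-in functools.lru_cache decorator can do the same caching for us; it is shown here purely as an alternative, not as part of the original article:

print(fibbo(10))   # 55
print(fibbo(50))   # 12586269025, returned almost instantly

# the same idea using the standard-library cache decorator
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n):
    if n <= 2:
        return 1
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(50))   # 12586269025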

 

The above method saves a lot of time, but what about the space complexity? It has to store many values, which takes extra space. If we think about the problem carefully, we can identify what we actually need and discard everything we don't: in the Fibonacci series, we only need the previous two numbers to compute the next one.


# Fibonacci series: print the first n numbers using only two variables
def fib(n):
    a = 1
    b = 1
    print(a)
    if n < 2:
        return
    print(b)
    for i in range(2, n):
        c = a + b    # the next number is the sum of the previous two
        a = b        # slide the window of "previous two" forward
        b = c
        print(c)
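
For example, calling the function for n = 7 prints the first seven Fibonacci numbers (the expected output is shown as a comment):

fib(7)
# prints 1, 1, 2, 3, 5, 8, 13 (each on its own line)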

 

The time complexity of the above method is O(n), whereas the space complexity is constant, O(1). We only ever keep the two most recent values and add them to produce the next one.

 

 

Divide and Conquer

 

Divide and Conquer is an approach that solves a large problem by breaking it into several sub-problems, dividing those further if they are still large, and later joining or combining all the results.


Divide and Conquer strategy


So the pseudo code is pretty simple: if the problem is small, there is no need to break it up, simply solve it; otherwise, break the problem into similar sub-problems, solve them recursively, and combine the results at the end.
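
As a tiny, concrete illustration of that recipe (this example is mine, not from the article), here is a divide and conquer function that finds the maximum of a list by splitting it in half:

def max_dac(A):
    if len(A) == 1:                     # small problem: solve it directly
        return A[0]
    mid = len(A) // 2
    left_max = max_dac(A[:mid])         # solve the left half recursively
    right_max = max_dac(A[mid:])        # solve the right half recursively
    return max(left_max, right_max)     # combine the two sub-results

print(max_dac([7, 2, 9, 4, 1]))         # 9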

 

Divide and conquer can solve:

 

  1. Binary search

  2. Merge sort

  3. Quicksort

  4. And more

 

Let’s take the binary search example.

 

Binary Search Using Python and Divide and Conquer

 

Binary search is a simple searching technique: given a sorted array ‘A’ and an element ‘x’, find the position of ‘x’ in ‘A’.

 

Index:   0    1    2    3    4    5    6    7    8
Value:   2    5    11   19   32   44   57   92   98

 

Indexes run from 0 to 8. Now, if we want to find 92, the most basic approach is to search from index 0 onwards. A simple for loop can solve this problem:


A = [2, 5, 11, 19, 32, 44, 57, 92, 98]
n = len(A)

def BS(A, n, x):
    # linear search: compare x with every element in turn
    for i in range(0, n):
        if A[i] == x:
            return i
    return -1

In this approach, if the number is present at the first index, that is the best case and only one comparison is needed. The worst case is that the element is not present in the array at all, which requires ‘n’ comparisons. Therefore, the worst-case time complexity is O(n).

 

(Recommended: DATA TYPES in Python)

 

The above code works on unsorted lists too; however, the number of comparisons is an issue. Let's think about the problem again: do we really have to make that many comparisons? The answer is no. Let us see how we can solve this problem with far fewer comparisons using divide and conquer.

 

Let us take the previous array, and say that we need to find 19 in the array.

 

Index:   0    1    2    3    4    5    6    7    8
Value:   2    5    11   19   32   44   57   92   98

 

NOTE: the list must be sorted for this method to work.

 

To find a number in the array, first compare it with the middle element. If the middle element equals the number we are looking for, we simply return its index. If the number is less than the middle element, we only need to search among the elements to the left of the middle; if it is greater, we only need to search among the elements to the right.

 

CASE 1: The element ‘x’ is equal to the middle element

CASE 2: The element ‘x’ is less than the middle element

CASE 3: The element ‘x’ is greater than the middle element

 

In our case the number is 19, which is less than the middle element, 32 (at index 4). Since 19 is smaller, we repeat the same step on the left half only, from index 0 to index 3: its middle element is 5, and 19 is greater than 5, so we then search between indexes 2 and 3, and so on until we land on 19 at index 3.

 

Let’s discuss the code for the above problem.


A = [2, 5, 11, 19, 32, 44, 57, 92, 98]   # the array must be sorted
n = len(A)

def BS(A, n, x):
    i = 0          # starting index
    j = n - 1      # last index
    while i <= j:
        mid = (i + j) // 2
        if A[mid] == x:
            return mid          # found: return the index
        elif x < A[mid]:
            j = mid - 1         # search the left half
        else:
            i = mid + 1         # search the right half
    return -1                   # not present

In the above code, ‘i’ is the starting index and ‘j’ is the last index, and the loop runs as long as the starting index is less than or equal to the last one. Each iteration finds the middle element of the current range: if it equals ‘x’, we return the middle index; if ‘x’ is less than the middle element, the last index becomes one less than the middle; otherwise, if ‘x’ is greater than the middle element, the starting index becomes one more than the middle.
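
As a quick sanity check (this snippet is only illustrative), we can test the function on the array above and compare it with Python's built-in bisect module, which performs the same kind of binary search on a sorted list:

A = [2, 5, 11, 19, 32, 44, 57, 92, 98]   # same sorted array as above
n = len(A)

print(BS(A, n, 19))    # 3  (19 sits at index 3)
print(BS(A, n, 100))   # -1 (not present)

import bisect
pos = bisect.bisect_left(A, 19)
print(pos if pos < n and A[pos] == 19 else -1)   # 3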

 

(Also read: What is the Difference Between SAS and Python?)

 

The problem is halved at every step, so if the problem size is ‘n’, it shrinks like n, n/2, n/4, ..., 1. This means the worst-case time complexity is no longer O(n) but O(log n): for an array of a million elements, that is roughly 20 comparisons instead of a million.
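
To see that halving concretely, here is a small counting sketch (the bs_steps helper is an illustration of mine, not part of the article's code) that counts the loop iterations of the search in a worst case for different array sizes:

import math

def bs_steps(n):
    # count binary-search iterations when the element is larger than
    # everything in a sorted array of size n (a worst case for the search)
    i, j, steps = 0, n - 1, 0
    while i <= j:
        steps += 1
        mid = (i + j) // 2
        i = mid + 1          # always move right, so the range keeps halving
    return steps

for n in (8, 1024, 1_000_000):
    print(n, bs_steps(n), round(math.log2(n), 1))
# roughly log2(n) iterations: 4, 11 and 20 respectively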

 

 

Conclusion

 

We have discussed two of the most widely used programming techniques, Divide and Conquer and Dynamic Programming. We saw how dynamic programming improves the time complexity of the Fibonacci computation, and later we worked through a binary search problem and solved it efficiently using the Divide and Conquer method.
