Dynamic Programming (DP) – A Beginner’s Guide

Updated on October 17, 2024


Dynamic Programming (DP) is a method for solving complex problems by dividing them into more manageable subproblems. Beginners often find it difficult, but once the core idea clicks, it can transform how you approach problem-solving. This blog aims to make Dynamic Programming approachable by laying out the basics and teaching you a way of thinking that makes these problems intuitive to solve.

 

In this blog post, we will explain what Dynamic Programming is and when to use it. We will also cover the main DP techniques, compare them with plain recursion and greedy algorithms, and work through examples along the way, ending with suggestions for problems to practise on.

What is Dynamic Programming?

Dynamic Programming (DP) is a method in computer science and mathematics for solving problems by breaking them down into smaller, simpler subproblems. The key idea is that each subproblem is solved only once and its solution is stored. When the same subproblem appears again, its value is retrieved from storage instead of being recomputed, cutting out repeated, time-consuming calculations. This makes solutions more efficient and faster.

 

Dynamic programming works well when the same subproblem occurs in multiple parts of a larger problem; such subproblems are called overlapping subproblems. Instead of solving each small problem many times over, we solve it once and store its value in a table or another data structure. Any later computation that needs the answer can then look it up rather than recalculate it from scratch, reducing the time complexity of the algorithm.

 

Some problems are organised in stages, where each stage has several possible states and each state offers several choices. The final solution is built by systematically combining the solutions to these subproblems. Dynamic Programming is frequently used for such problems, including shortest paths on graphs, resource optimisation, and puzzles.


When to Use Dynamic Programming?

Dynamic programming (DP) does not apply to every kind of problem. It works best on problems that exhibit certain characteristics. The sections below describe those characteristics and illustrate each with an example.

1. Overlapping Subproblems

Overlapping subproblems occur when the same subproblem recurs in different parts of the larger problem. Instead of solving the same subproblem repeatedly, DP allows us to solve it once, store the solution, and reuse it whenever needed. This characteristic is key to the efficiency of DP.

 

  • Example: Take the Fibonacci sequence, where the nth number is the sum of the previous two: f(n) = f(n-1) + f(n-2). A simple recursive implementation recomputes fib(n-1), fib(n-2), and so on down to the 0th term many times over, resulting in exponential time complexity. Using memoization to store already-computed values brings the growth rate down from O(2^n) to O(n).

2. Optimal Substructure

A problem has an optimal substructure if an optimal solution to the problem can be constructed efficiently from optimal solutions to its subproblems. In other words, solving the subproblems optimally guarantees that the overall problem is also solved optimally. For example, if the shortest path from A to C passes through B, then the A-to-B portion of that path must itself be a shortest path.

3. Problems with Recursive Nature

DP is useful for problems that can be expressed with recursive formulas. These problems usually involve a sequence of decisions where each step relies on the outcomes of smaller subproblems.

 

  • Example: The Knapsack problem is a classic example. It involves choosing items to maximise total value without exceeding a weight limit. By breaking the problem into smaller decisions, whether or not to include a particular item, you can use DP to store the best solution for each remaining capacity and gradually build up the optimal solution for the entire problem (see the sketch below).
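
To make this concrete, here is a minimal bottom-up sketch of the 0/1 Knapsack DP in Python. The function name knapsack and the example values and weights are illustrative assumptions, not part of the original problem statement.

def knapsack(values, weights, capacity):
    # dp[w] = best value achievable with remaining capacity w
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

values = [60, 100, 120]   # assumed example data
weights = [10, 20, 30]
print(knapsack(values, weights, 50))  # Output: 220 (take the last two items)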

How Does Dynamic Programming Work?

Dynamic Programming (DP) breaks a problem down into smaller, manageable subproblems, solves each of them once, and stores their solutions. In this way, we avoid redoing the same calculations, which makes the algorithm more efficient. Let us walk through DP step by step:

1. Identify the Subproblems

The first step in employing DP is identifying the original problem’s subproblems. Subproblems should be scaled-down versions of the problem at hand. In the Fibonacci sequence, for instance, the earlier Fibonacci numbers are the subproblems: to find the 10th Fibonacci number, we must first calculate the 9th and 8th.

2. Decide on your Approach: Top-Down or Bottom-Up

Two main approaches exist for implementing DP: Top-Down and Bottom-Up.

 

a. Top-Down (Memoization): In this approach, we start from the top (the main problem) and work our way down to the smallest subproblems. A table (or memo) stores the computed results of subproblems. If the same subproblem arises again, we simply retrieve the stored result, so there is no need for recomputation.

  • Example: Suppose you’re computing fib(5) recursively. You have to calculate fib(4) and fib(3), and fib(4) in turn needs fib(3) again. If you store the result of fib(3) the first time, you never have to compute it twice.

b. Bottom-Up (Tabulation): This approach takes the opposite direction: we first solve the smallest subproblems and use their solutions to resolve larger ones, filling out a table systematically until the solution to the main problem is reached.

3. Store the Solutions

After solving a subproblem, we save its solution in a data structure such as an array, matrix, or hash table. Since the same results never need to be recalculated, this increases the efficiency of the algorithm.

4. Construct the Solution to the Original Problem

Finally, having solved all the subproblems and stored their solutions, we combine those stored results to build an efficient answer to the original problem.

How to Solve a Dynamic Programming Problem?

Let’s consider the problem of finding the nth Fibonacci number. The Fibonacci sequence is a series where each number is the sum of the two preceding ones, starting from 0 and 1. The problem statement can be defined as:

 

Problem Statement: Given an integer n, find the nth Fibonacci number. The Fibonacci sequence is defined as:

  • Fib(0) = 0
  • Fib(1) = 1
  • Fib(n) = Fib(n-1) + Fib(n-2) for n > 1

 

Now, let’s explore how to approach this problem, starting with the naive recursive solution and then gradually improving it with Dynamic Programming.

Step 1: Start with the Brute Force Recursive Solution

The most straightforward way to solve the Fibonacci problem is by using a recursive function that directly follows the definition of the sequence. However, this approach is not efficient. Let’s walk through it to understand why.

 

Brute Force Algorithm

 

  1. Base Cases: If n is 0, return 0. If n is 1, return 1.
  2. Recursive Case: If n is greater than 1, recursively calculate Fib(n-1) and Fib(n-2) and return their sum.

 

Python Code (Brute Force)

def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fib(n-1) + fib(n-2)

n = 5
print(f"Fib({n}) = {fib(n)}")  # Output: 5

 

For Fib(5), the function will call fib(4) and fib(3). To compute fib(4), it calls fib(3) and fib(2). Notice that fib(3) is computed twice, and similarly, other values are recalculated multiple times. This leads to an exponential time complexity of O(2^n).
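
To see this redundancy directly, here is a small instrumentation sketch (the calls counter and the fib_counted name are illustrative assumptions) that counts how often each value is evaluated by the brute-force version.

from collections import Counter

calls = Counter()

def fib_counted(n):
    calls[n] += 1  # Record every time fib(n) is evaluated
    if n <= 1:
        return n
    return fib_counted(n - 1) + fib_counted(n - 2)

fib_counted(5)
print(calls)  # fib(1) alone is evaluated 5 times for n = 5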

Step 2: Identify the Problem with Redundant Calculations

In the brute force approach, the same subproblems (like Fib(3) and Fib(2)) are solved multiple times. This redundancy makes the solution inefficient. To optimise this, we need to store the results of subproblems as we compute them. This is where Dynamic Programming comes into play.

Step 3: Think in Terms of Subproblems and Overlapping Subproblems

Before implementing a DP solution, ask yourself:

 

  • What are the subproblems? In this case, Fib(n) depends on Fib(n-1) and Fib(n-2).
  • Are these subproblems overlapping? Yes, because the same Fibonacci numbers are computed multiple times.
  • Can I store the results of these subproblems to avoid redundant work? Yes, and this is exactly what we’ll do in the DP approach.

Step 4: Implement the Top-Down Approach (Memoization)

The Top-Down approach starts from the original problem and breaks it down into smaller subproblems, storing the results of these subproblems as you go.

 

Step-by-Step Process

 

  1. Base Cases: If n is 0 or 1, return n.
  2. Check for Stored Results: Before calculating Fib(n), check if it’s already been computed and stored.
  3. Recursive Calculation: If not already computed, calculate Fib(n-1) and Fib(n-2), store the results, and then return their sum.

 

Python Code (Top-Down with Memoization)

 

def fib_memo(n, memo=None):
    if memo is None:  # Avoid Python's mutable-default-argument pitfall
        memo = {}
    if n in memo:  # Check if already computed
        return memo[n]
    if n == 0:  # Base case
        return 0
    elif n == 1:  # Base case
        return 1
    memo[n] = fib_memo(n-1, memo) + fib_memo(n-2, memo)  # Store the result
    return memo[n]

n = 5
print(f"Fib({n}) = {fib_memo(n)}")  # Output: 5

 

Example Input Walkthrough

 

  • Start with fib_memo(5).
  • Calculate fib_memo(4) and fib_memo(3).
  • Continue breaking down until you reach fib_memo(1) and fib_memo(0), which are base cases.
  • Store the results as you return from the recursive calls.
  • Use the stored results to compute the higher values efficiently.

 

Analysis

 

  • Time Complexity: O(n) – Each subproblem is solved once.
  • Space Complexity: O(n) – Due to the memoization storage and the recursion stack.
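
As an aside, Python’s standard library can manage the memo for you. Here is a minimal sketch using functools.lru_cache (the fib_cached name is an illustrative assumption):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n):
    if n <= 1:  # Base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)  # Results are cached automatically

n = 5
print(f"Fib({n}) = {fib_cached(n)}")  # Output: 5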

Step 5: Implement the Bottom-Up Approach (Tabulation)

The Bottom-Up approach works by solving the smallest subproblems first and building up the solution for the original problem.

 

Step-by-Step Process

 

  1. Initialise the Table: Start with an array where fib[0] = 0 and fib[1] = 1.
  2. Iterate to Fill the Table: For each subsequent Fibonacci number, use the previous two numbers to calculate it.
  3. Return the Final Result: The last entry in the table will be Fib(n).

 

Python Code (Bottom-Up with Tabulation)

def fib_tab(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1

    fib_table = [0] * (n + 1)
    fib_table[0] = 0
    fib_table[1] = 1

    for i in range(2, n + 1):
        fib_table[i] = fib_table[i-1] + fib_table[i-2]  # Use previous results

    return fib_table[n]

n = 5
print(f"Fib({n}) = {fib_tab(n)}")  # Output: 5

 

Example Input Walkthrough

 

  • Start by initialising fib_table[0] = 0 and fib_table[1] = 1.
  • Calculate fib_table[2] by adding fib_table[1] and fib_table[0].
  • Continue this process until you fill up fib_table[n].

 

Analysis

 

  • Time Complexity: O(n) – The loop runs n-1 times.
  • Space Complexity: O(n) – Space is needed to store the Fibonacci numbers (though this can be reduced; see the sketch below).
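
Since each table entry depends only on the previous two, the O(n) table can be reduced to two variables, giving O(1) space. A minimal sketch (the fib_two_vars name is an illustrative assumption):

def fib_two_vars(n):
    if n <= 1:
        return n
    prev, curr = 0, 1  # fib(0) and fib(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr  # Slide the two-value window forward
    return curr

n = 5
print(f"Fib({n}) = {fib_two_vars(n)}")  # Output: 5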

Recursion vs Dynamic Programming

Aspect | Recursion | Dynamic Programming
Approach | Breaks down a problem into subproblems, but often solves the same subproblems multiple times. | Breaks down a problem into subproblems and stores their solutions to avoid redundant work.
Time Complexity | Often exponential (e.g., O(2^n) for Fibonacci). | Usually polynomial (e.g., O(n) for Fibonacci with DP).
Space Complexity | Depends on the recursion depth (O(n) for the recursion stack). | Depends on the storage used for memoization or tabulation (O(n) for storing results).
Solution Reuse | Does not reuse the solutions of subproblems. | Reuses the solutions of subproblems to optimise the process.
Overlapping Subproblems | Leads to redundant calculations. | Avoids redundant calculations by storing and reusing subproblem results.
Use Cases | Suitable for problems without overlapping subproblems or when the problem size is small. | Suitable for problems with overlapping subproblems and optimal substructure.

Tabulation vs Memoization

Aspect | Tabulation (Bottom-Up) | Memoization (Top-Down)
Approach | Starts from the smallest subproblems and works up to the main problem. | Starts with the main problem and breaks it down into smaller subproblems, storing the results as it goes.
Direction | Iterative; usually uses loops. | Recursive; uses function calls.
Order of Subproblem Solving | Solves all smaller subproblems first. | Solves subproblems as needed, caching results for reuse.
Space Complexity | Requires space for a table storing solutions to all subproblems (O(n)). | Requires space for stored solutions plus the recursion stack (O(n)).
Implementation Complexity | Can be simpler to implement as it avoids recursion. | Can be more intuitive for those comfortable with recursion, but may involve managing recursion depth.
Performance | Slightly faster in practice due to the lack of function-call overhead. | Can be easier to implement but may have more overhead due to recursive calls.

 

Greedy Approach vs Dynamic Programming

 

Aspect | Greedy Approach | Dynamic Programming
Approach | Makes the locally optimal choice at each step, hoping to find a global optimum. | Solves problems by breaking them down into subproblems and combining their solutions to find the global optimum.
Optimal Solution | Does not always guarantee a globally optimal solution. | Guarantees a globally optimal solution if the problem has optimal substructure.
Use Cases | Suitable for problems where locally optimal choices lead to a global optimum (e.g., Dijkstra’s algorithm). | Suitable for problems with overlapping subproblems and optimal substructure (e.g., the Knapsack problem).
Time Complexity | Typically faster, often O(n) or O(n log n), since it doesn’t explore all possible solutions. | Typically slower, often O(n^2) or higher, as it explores many subproblems and combines their solutions.
Problem Types | Works well for optimisation problems where local decisions lead to a global solution. | Works well for complex problems where multiple subproblem solutions must be combined to form the optimal solution.
Examples | Coin Change problem (a greedy approach may not always yield the minimum number of coins for a given amount). | Coin Change problem (DP always finds the minimum number of coins).
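
Here is a hedged illustration of the coin-change row above, using the assumed denominations [1, 3, 4] (the greedy_coins and dp_coins names are illustrative): greedy picks the largest coin first and misses the minimum, while DP finds it.

def greedy_coins(coins, amount):
    # Always take the largest coin that still fits
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else None

def dp_coins(coins, amount):
    INF = float("inf")
    dp = [0] + [INF] * amount  # dp[a] = minimum coins needed to make amount a
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a:
                dp[a] = min(dp[a], dp[a - coin] + 1)
    return dp[amount] if dp[amount] != INF else None

coins, amount = [1, 3, 4], 6
print(greedy_coins(coins, amount))  # 3 coins (4 + 1 + 1)
print(dp_coins(coins, amount))      # 2 coins (3 + 3)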

Advantages of Dynamic Programming

Dynamic programming (DP) provides a range of advantages, especially for problems with overlapping subproblems and optimal substructure. The main advantages are:

 

  • Efficiency: DP significantly reduces time complexity by avoiding unnecessary calculations, making it well suited to problems whose naive solutions are exponential.
  • Optimal Solutions: DP guarantees optimal results by solving subproblems systematically and combining their solutions.
  • Versatility: DP applies to many types of problems, from optimisation to combinatorial ones, making it a powerful tool in algorithm design.

Applications of Dynamic Programming

Dynamic programming (DP) is applied in various areas to solve intricate problems efficiently. These include:

 

  • Optimisation Problems: When maximising or minimising a value under constraints, as in the Knapsack problem, dynamic programming plays a crucial role.
  • String Processing: Algorithms such as Longest Common Subsequence (LCS) and Edit Distance use DP to compare and process strings efficiently (see the sketch after this list).
  • Pathfinding Algorithms: The Bellman-Ford and Floyd-Warshall algorithms use dynamic programming to find shortest paths in graphs.
  • Combinatorial Problems: Counting tasks, such as how many ways a set can be partitioned or a staircase climbed with different step sizes, rely on dynamic programming.
  • Resource Allocation: Distributing limited resources optimally is another problem where dynamic programming is useful.
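
As a taste of the string-processing applications above, here is a minimal sketch of the LCS table-filling DP (the lcs_length name and the example strings are illustrative assumptions):

def lcs_length(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # Extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # Drop one character
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # Output: 4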

Conclusion

Dynamic programming (DP) is a powerful technique that simplifies complex problems by breaking them down into simpler subproblems. By storing solutions as it goes, DP ensures that each subproblem is solved only once, improving efficiency throughout the entire process.

 

Whether you are solving optimisation problems, processing strings, or finding paths, Dynamic Programming offers a systematic approach to building optimised solutions. As you practise and apply DP, you will develop an intuitive sense for recognising the problems where it can be used effectively.

 

This guide has provided a solid foundation, from understanding the basic concepts to implementing solutions in code. With continued practice, even the most challenging DP problems will no longer seem so difficult.

FAQs

What is Dynamic Programming?
Dynamic Programming is an approach to solving problems by breaking them down into smaller subproblems and storing their answers.

How does DP optimise recursion?
DP stores the results of subproblems, so recursion is optimised by avoiding repetitive calculations.

Where is DP applied?
DP finds its application in optimisation problems, string processing, pathfinding, resource allocation, and more.

What is the difference between the top-down and bottom-up approaches?
The top-down approach uses memoization, while the bottom-up approach employs tabulation.

How does DP differ from greedy algorithms?
Greedy algorithms make a locally optimal choice at each step in the hope of reaching a global optimum, while DP guarantees optimal solutions for problems with optimal substructure.

Which problems are good for practising DP?
Problems such as the Fibonacci Sequence, the Knapsack Problem, and the Longest Common Subsequence provide great practice opportunities.

When should I use DP?
Use DP when there are overlapping subproblems and optimal substructure.

