
Dynamic Programming (DP) is a method of solving complex problems by dividing them into more manageable subproblems. Beginners may find it difficult, but once the basic idea is understood, it can completely transform the process of problem-solving. This blog lays out the fundamentals of Dynamic Programming and teaches you how to think in a way that makes these problems intuitive to solve.

In this blog post, we will explain what Dynamic Programming is and when to use it. We will also discuss the different approaches, compare DP against recursion and greedy algorithms, and work through examples, along with suggestions for problems to practise on.

Dynamic Programming (DP) is a method in computer science and mathematics for solving problems by breaking them down into smaller, simpler subproblems. The key idea is that each subproblem needs to be solved only once: its solution is stored, so if the same subproblem comes up again while solving the main problem, there is no need to recompute it. Instead, its value is retrieved from storage, cutting out time-consuming repeated calculations. This makes solutions more efficient and faster.

Dynamic programming works well when the same subproblem occurs in multiple parts of a larger problem; such subproblems are called overlapping subproblems. Instead of solving each small problem many times over, we solve it once and store its value in a table or some other data structure. Any answer needed later can then be looked up rather than recalculated from scratch, reducing the time complexity of the algorithm.

There are situations where a problem has stages divided into different possible states, with each state offering several alternatives. The solution is then built by combining the solutions of these subproblems in a systematic manner. Dynamic Programming is frequently used for solving complex problems, including shortest paths on graphs, resource optimisation, and puzzles.

Dynamic programming (DP) does not apply to all kinds of problems. It works best on problems that exhibit certain characteristics. To help you identify when DP should be used, the sections below explore these characteristics with examples.

Overlapping subproblems occur when the same subproblem recurs in several parts of the larger problem. Instead of solving it repeatedly, DP allows us to solve it once, store the solution, and reuse it whenever needed. This characteristic is key to the efficiency of DP.

**Example:** Take the Fibonacci sequence, where the nth number is generated by adding the previous two numbers together, i.e. f(n) = f(n-1) + f(n-2). If we write simple recursive code to calculate the nth Fibonacci number, then every call recomputes fib(n-1), fib(n-2), and so on down to the 0th term, with many values calculated multiple times, which results in exponential time complexity. Using memoization to store already computed values brings the growth rate down from O(2^n) to O(n).

A problem has an optimal substructure if an optimal solution to the problem can be constructed efficiently from optimal solutions of its subproblems. In other words, solving the subproblems optimally guarantees that the overall problem will also be solved optimally.

DP is useful for solving problems that can be solved using recursive formulas. These types of problems usually involve making successive decisions where each step relies on outcomes from smaller subproblems.

**Example:** The Knapsack problem is a classic example. It involves choosing items to maximise total value without exceeding a weight limit. By breaking the problem into smaller decisions (whether or not to include each item), you can use DP to store the best solution for each weight limit, gradually building up the optimal solution for the entire problem.
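The decision structure described above can be sketched as a small bottom-up table. This is a minimal illustration of the 0/1 knapsack idea, not code from the original article; the function and variable names are our own.

```python
def knapsack(values, weights, capacity):
    # best[w] holds the maximum value achievable with weight limit w
    best = [0] * (capacity + 1)
    for i in range(len(values)):
        # Iterate weights downward so each item is used at most once
        for w in range(capacity, weights[i] - 1, -1):
            best[w] = max(best[w], best[w - weights[i]] + values[i])
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

Each entry of `best` is the optimal answer to a smaller subproblem (a tighter weight limit), and the final answer is read off from `best[capacity]`.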

Dynamic Programming (DP) involves breaking a problem into smaller, manageable subproblems, solving each one once, and storing its solution. In this way, we avoid redoing similar calculations, enhancing the efficiency of the algorithm. Let us take DP step by step:

The first step in employing DP is identifying the original problem’s subproblems. Subproblems should be scaled-down versions of the actual problem at hand. In the Fibonacci sequence, for instance, the calculations of the earlier Fibonacci numbers act as the subproblems: to find the 10th Fibonacci number, we must first calculate the 9th and 8th.

Two main approaches exist for implementing DP: Top-Down and Bottom-Up.

**a. Top-Down (Memoization):** In this approach, we start from the top (the main problem) and work our way down to the smallest subproblems. A table (or memo) stores the computed results of subproblems. If the same subproblem ever arises again, we simply retrieve the stored result, so there is no need for re-computation.

**Example:** Suppose you’re trying to find fib(5) through recursion. To do this, you have to calculate fib(4) and fib(3). If you store their results the first time they are computed, you won’t have to compute fib(4) and fib(3) a second time during the calculation of fib(5).

**b. Bottom-Up (Tabulation):** This approach takes the opposite direction: we first address the smallest subproblems and use their solutions to resolve larger ones, systematically filling out a table until the solution to the main problem is found.

After solving a subproblem, we save its solution in a data structure such as an array, matrix, or hash table. Because the same results never need to be recalculated, the efficiency of the algorithm increases.

Lastly, once all subproblems are solved and their solutions stored, we combine the stored results to build the final answer to the original problem efficiently.

Let’s consider the problem of finding the nth Fibonacci number. The Fibonacci sequence is a series where each number is the sum of the two preceding ones, starting from 0 and 1. The problem statement can be defined as:

**Problem Statement:** Given an integer n, find the nth Fibonacci number. The Fibonacci sequence is defined as:

- Fib(0) = 0
- Fib(1) = 1
- Fib(n) = Fib(n-1) + Fib(n-2) for n > 1

Now, let’s explore how to approach this problem, starting with the naive recursive solution and then gradually improving it with Dynamic Programming.

The most straightforward way to solve the Fibonacci problem is by using a recursive function that directly follows the definition of the sequence. However, this approach is not efficient. Let’s walk through it to understand why.

**Brute Force Algorithm**

- **Base Cases:** If n is 0, return 0. If n is 1, return 1.
- **Recursive Case:** If n is greater than 1, recursively calculate Fib(n-1) and Fib(n-2) and return their sum.

**Python Code (Brute Force)**
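The code block appears to have been lost from this page; a minimal sketch of the naive recursion described above would look like this:

```python
def fib(n):
    # Base cases: Fib(0) = 0, Fib(1) = 1
    if n <= 1:
        return n
    # Recursive case: both branches are recomputed on every call
    return fib(n - 1) + fib(n - 2)

print(fib(5))  # 5
```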

For Fib(5), the function will call fib(4) and fib(3). To compute fib(4), it calls fib(3) and fib(2). Notice that fib(3) is computed twice, and similarly, other values are recalculated multiple times. This leads to an exponential time complexity of O(2^n).

In the brute force approach, the same subproblems (like Fib(3) and Fib(2)) are solved multiple times. This redundancy makes the solution inefficient. To optimise this, we need to store the results of subproblems as we compute them. This is where Dynamic Programming comes into play.

Before implementing a DP solution, ask yourself:

- **What are the subproblems?** In this case, Fib(n) depends on Fib(n-1) and Fib(n-2).
- **Are these subproblems overlapping?** Yes, because the same Fibonacci numbers are computed multiple times.
- **Can I store the results of these subproblems to avoid redundant work?** Yes, and this is exactly what we’ll do in the DP approach.

The Top-Down approach starts from the original problem and breaks it down into smaller subproblems, storing the results of these subproblems as you go.

**Step-by-Step Process**

- **Base Cases:** If n is 0 or 1, return n.
- **Check for Stored Results:** Before calculating Fib(n), check if it’s already been computed and stored.
- **Recursive Calculation:** If not already computed, calculate Fib(n-1) and Fib(n-2), store the results, and then return their sum.

**Python Code (Top-Down with Memoization)**
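The code block is missing from this page; a minimal memoized version matching the steps above, using the `fib_memo` name from the walkthrough below, could look like this:

```python
def fib_memo(n, memo=None):
    # memo caches Fibonacci numbers that have already been computed
    if memo is None:
        memo = {}
    if n <= 1:          # base cases: Fib(0) = 0, Fib(1) = 1
        return n
    if n in memo:       # reuse a stored result instead of recomputing
        return memo[n]
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(50))  # 12586269025
```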

**Example Input Walkthrough**

- Start with fib_memo(5).
- Calculate fib_memo(4) and fib_memo(3).
- Continue breaking down until you reach fib_memo(1) and fib_memo(0), which are base cases.
- Store the results as you return from the recursive calls.
- Use the stored results to compute the higher values efficiently.

**Analysis**

- **Time Complexity:** O(n) – Each subproblem is solved once.
- **Space Complexity:** O(n) – Due to the memoization storage and the recursion stack.

The Bottom-Up approach works by solving the smallest subproblems first and building up the solution for the original problem.

**Step-by-Step Process**

- **Initialise the Table:** Start with an array where fib[0] = 0 and fib[1] = 1.
- **Iterate to Fill the Table:** For each subsequent Fibonacci number, use the previous two numbers to calculate it.
- **Return the Final Result:** The last entry in the table will be Fib(n).

**Python Code (Bottom-Up with Tabulation)**
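The code block is missing from this page; a minimal tabulated version matching the steps above, using the `fib_table` name from the walkthrough below, could look like this:

```python
def fib_tab(n):
    if n <= 1:
        return n
    # Table holding Fib(0) .. Fib(n), filled from the bottom up
    fib_table = [0] * (n + 1)
    fib_table[1] = 1
    for i in range(2, n + 1):
        fib_table[i] = fib_table[i - 1] + fib_table[i - 2]
    return fib_table[n]

print(fib_tab(10))  # 55
```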

**Example Input Walkthrough**

- Start by initialising fib_table[0] = 0 and fib_table[1] = 1.
- Calculate fib_table[2] by adding fib_table[1] and fib_table[0].
- Continue this process until you fill up fib_table[n].

**Analysis**

- **Time Complexity:** O(n) – The loop runs n-1 times.
- **Space Complexity:** O(n) – Space is needed to store the Fibonacci numbers.


| Aspect | Recursion | Dynamic Programming |
| --- | --- | --- |
| Approach | Breaks down a problem into subproblems, but often solves the same subproblems multiple times. | Breaks down a problem into subproblems and stores their solutions to avoid redundant work. |
| Time Complexity | Often exponential (e.g., O(2^n) for Fibonacci). | Usually polynomial (e.g., O(n) for Fibonacci with DP). |
| Space Complexity | Depends on the recursion depth (O(n) for the recursion stack). | Depends on the storage used for memoization or tabulation (O(n) for storing results). |
| Solution Reuse | Does not reuse the solutions of subproblems. | Reuses the solutions of subproblems to optimise the process. |
| Overlapping Subproblems | Leads to redundant calculations. | Avoids redundant calculations by storing and reusing subproblem results. |
| Use Cases | Suitable for problems without overlapping subproblems or when the problem size is small. | Suitable for problems with overlapping subproblems and optimal substructure. |

| Aspect | Tabulation (Bottom-Up) | Memoization (Top-Down) |
| --- | --- | --- |
| Approach | Starts from the smallest subproblems and works up to the main problem. | Starts with the main problem and breaks it down into smaller subproblems, storing the results as it goes. |
| Direction | Iterative, usually uses loops. | Recursive, uses function calls. |
| Order of Subproblem Solving | Solves all smaller subproblems first. | Solves subproblems as needed, caching results for reuse. |
| Space Complexity | Requires space for a table to store solutions of all subproblems (O(n)). | Requires space for storing solutions and the recursion stack (O(n)). |
| Implementation Complexity | Can be simpler to implement as it avoids recursion. | Can be more intuitive for those comfortable with recursion, but may involve managing recursion stack depth. |
| Performance | Slightly faster in practice due to the lack of function call overhead. | Can be easier to implement but may have more overhead due to recursive calls. |

| Aspect | Greedy Approach | Dynamic Programming |
| --- | --- | --- |
| Approach | Makes the locally optimal choice at each step, hoping to find a global optimum. | Solves problems by breaking them down into subproblems and combining their solutions to find the global optimum. |
| Optimal Solution | Does not always guarantee a globally optimal solution. | Guarantees a globally optimal solution if the problem has an optimal substructure. |
| Use Cases | Suitable for problems where locally optimal choices lead to a global optimum (e.g., Dijkstra’s algorithm). | Suitable for problems with overlapping subproblems and an optimal substructure (e.g., the Knapsack problem). |
| Time Complexity | Typically faster, often O(n) or O(n log n), since it doesn’t explore all possible solutions. | Typically slower, often O(n^2) or higher, as it explores many subproblems and combines their solutions. |
| Problem Types | Works well for optimisation problems where local decisions lead to a global solution. | Works well for complex problems where multiple solutions need to be combined to form the optimal solution. |
| Examples | Coin Change problem (a greedy choice of coins may not always yield the minimum number). | Coin Change problem (DP guarantees the minimum number of coins). |
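The Coin Change contrast can be made concrete with a small sketch. This is our own illustration, not code from the article; the coin set [1, 3, 4] is chosen because greedy (always taking the largest coin first) fails on it.

```python
def min_coins(coins, amount):
    # dp[a] = fewest coins needed to make amount a (inf if impossible)
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

# Greedy would pick 4 + 1 + 1 = 3 coins for amount 6,
# but DP finds the optimal 3 + 3 = 2 coins.
print(min_coins([1, 3, 4], 6))  # 2
```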

Dynamic programming (DP) provides a range of advantages, especially if we are dealing with problems containing overlapping subproblems and an optimal structure. Below are the main advantages:

- **Efficiency:** DP has a significant impact on time complexity by avoiding unnecessary calculations, making it suited to problems whose naive solutions would be exponential.
- **Optimal Solutions:** DP guarantees that the resulting solutions are optimal by solving subproblems systematically and then combining their results.
- **Versatility:** DP can be used to solve many types of problems, from optimisation to combinatorial ones, making it a powerful tool in algorithm design.

There are various areas where dynamic programming (DP) is applied to solve intricate problems efficiently. These include:

- **Optimization Problems:** When maximising or minimising a value under specific constraints, as in the Knapsack problem, dynamic programming plays a crucial role.
- **String Processing:** For instance, Longest Common Subsequence (LCS) and Edit Distance use DP to compare and process strings efficiently.
- **Pathfinding Algorithms:** The Bellman-Ford and Floyd-Warshall algorithms use dynamic programming to find shortest paths in graphs.
- **Combinatorial Problems:** For example, counting the ways to partition a set or to climb stairs with different step sizes; such tasks are natural fits for dynamic programming.
- **Resource Allocation:** Dynamic programming is useful for problems that involve distributing resources optimally.

Dynamic programming (DP) is a powerful technique that tames complex problems by breaking them down into simpler subproblems. By storing solutions as it progresses, DP ensures that each subproblem is solved only once, improving efficiency throughout the entire process.

Dynamic Programming offers a systematic approach to finding optimised solutions, whether you are solving optimization problems, doing string processing, or pathfinding. As you practise and apply DP, you will develop an intuitive sense for identifying problems where it can be used effectively.

This guide has provided a solid foundation, from understanding basic concepts to implementing solutions in code. With continued practice, even the most challenging DP problems will no longer present themselves as difficult.

FAQs

What is Dynamic Programming?

Dynamic Programming is an approach used in solving problems by breaking them down into smaller subproblems and storing their answers.

How is Dynamic Programming different from Recursion?

DP stores the results of subproblems, optimising plain recursion by avoiding repeated calculations.

What are common applications of Dynamic Programming?

DP finds its application in optimization problems, string processing, path-finding, resource allocation, etc.

What is the difference between Memoization and Tabulation?

The Top-down approach uses memoization while the bottom-up one employs tabulation.

Can Greedy Algorithms be used instead of DP?

Greedy algorithms make a locally optimal choice at each step, hoping it leads to a global optimum; DP, on the other hand, systematically explores the subproblems and guarantees an optimal solution when the problem has optimal substructure.

What are some good problems to practise Dynamic Programming?

Problems such as the Fibonacci Sequence, Knapsack Problem or Longest Common Subsequence provide great practice opportunities.

When should I use Dynamic Programming?

Use DP when there are overlapping subproblems and optimal substructure.


© 2024 Hero Vired. All rights reserved