# Dynamic Programming: An Overall View of DP

Dynamic programming is an algorithmic technique that is often applied in a “bottom-up” fashion. It is popular because it is a powerful tool for solving optimization problems. To learn more about dynamic programming algorithms, finish reading this article; you can find more information at __algo.monster__.

**Defining the algorithm**

What is DP? A dynamic programming algorithm breaks a problem down into a collection of similar, but simpler, subproblems that depend on one another. It then builds the solution to the large, complex problem from the stored solutions to those easier subproblems. In optimization terms, the algorithm maximizes profit or minimizes cost.

**Why is this technique called dynamic programming?**

In his book, *Eye of the Hurricane: An Autobiography* (1984), Bellman explains the reasons behind the term dynamic programming.

Bellman chose the word “dynamic” to describe the time-varying nature of the problems; he also thought it sounded impressive. “Programming” referred to the process of finding an optimal program, such as a military or logistics training schedule. This usage is the same as in mathematical programming and linear programming, where “programming” is a synonym for mathematical optimization.

**Features of the dynamic programming algorithm**

Dynamic programming is applicable only to problems that have optimal substructure and overlapping subproblems. When a problem is solved by combining optimal solutions to *non-overlapping* subproblems, the technique is called “divide and conquer” instead. That is why merge sort and quicksort are classified as divide-and-conquer algorithms, not dynamic programming.

**Optimal substructure**

This means that the optimal solution to an optimization problem can be constructed from the optimal solutions to its subproblems. Such optimal substructures are usually described using recursion.

Given a graph G = (V, E), the shortest path p from a vertex u to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest path p. If p is truly the shortest path, it can be split into sub-paths p1 from u to w and p2 from w to v such that each is, in turn, the shortest path between its endpoints (by the cut-and-paste argument described in Introduction to Algorithms). This allows shortest paths to be computed recursively, as in the Bellman-Ford and Floyd-Warshall algorithms.
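
To make the optimal-substructure idea concrete, here is a minimal sketch of the Bellman-Ford relaxation loop in Python (the edge list and vertex count below are made up for illustration; the graph is assumed to have no negative cycles reachable from the source):

```python
def bellman_ford(num_vertices, edges, source):
    """Shortest distances from `source`, given edges as (u, v, weight)."""
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    # Relax every edge num_vertices - 1 times. After pass k, dist[v] is
    # optimal over all paths using at most k edges -- each shortest path
    # is built from shortest sub-paths (optimal substructure).
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)]
print(bellman_ford(4, edges, 0))  # [0, 3, 1, 4]
```

Each relaxation reuses the best distance found so far for u to improve the distance for v, which is exactly the recursive decomposition described above.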

**Sub-problems that are overlapping**

This means that the space of subproblems should be small: a recursive algorithm for the problem should solve the same subproblems over and over rather than generating new ones. Consider the recursive formula for the Fibonacci series, Fi = Fi−1 + Fi−2 with base cases F1 = F2 = 1. Then F43 = F42 + F41 and F42 = F41 + F40, so F41 is solved in the recursive sub-trees of both F43 and F42. Although the number of distinct subproblems is small (only 43), a naive recursive approach like this solves the same subproblems over and over again. Dynamic programming solves each subproblem only once.
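
To see how severe the repetition is, here is a small sketch (not from the article) that counts how often the naive recursion revisits each subproblem; the `calls` dictionary is an instrumentation detail added for illustration:

```python
calls = {}  # how many times each subproblem F(i) is solved

def fib(i):
    calls[i] = calls.get(i, 0) + 1
    if i <= 2:           # base cases F1 = F2 = 1
        return 1
    return fib(i - 1) + fib(i - 2)

result = fib(10)
print(result)     # 55
print(calls[2])   # 34 -- the single subproblem F2 is recomputed 34 times
```

The number of repeats grows exponentially with n, which is why the naive approach becomes hopeless long before F43.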

**Approaches to DP**

There are two kinds of DP approaches.

**Top-down approach**

The top-down method follows directly from the recursive formulation of a problem: the solution to the problem is expressed recursively in terms of its subproblems. If the subproblems overlap, their results can be cached in a table. To solve a subproblem, check the table first; if the solution has already been stored, reuse it. Otherwise, solve the subproblem and record the outcome in the table for future use.
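
A minimal top-down sketch in Python, using the standard-library `functools.lru_cache` decorator as the memoization table for the Fibonacci recurrence from the article:

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # the cache plays the role of the lookup table
def fib(i):
    # Each distinct subproblem is solved once; repeat calls hit the cache.
    if i <= 2:             # base cases F1 = F2 = 1
        return 1
    return fib(i - 1) + fib(i - 2)

print(fib(43))  # 433494437, computed with only 43 distinct subproblems
```

The decorator is just a convenience; a hand-written dictionary checked before each recursive call achieves the same effect.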

**Bottom-up approach**

When the solution to a problem can be described recursively in terms of its subproblems, it can also be reformulated in a bottom-up style: solve the smallest subproblems first, then use their solutions to solve progressively larger ones. This is typically done in a table, iteratively building solutions to larger subproblems from the solutions to smaller ones. For example, once the values of F41 and F40 are known, the value of F42 follows from a single addition, with no further recursion.
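
The same Fibonacci recurrence in a bottom-up sketch: the table is filled from the base cases upward, so every entry it reads has already been computed:

```python
def fib_bottom_up(n):
    if n <= 2:                     # base cases F1 = F2 = 1
        return 1
    table = [0] * (n + 1)          # table[i] will hold F(i)
    table[1] = table[2] = 1
    for i in range(3, n + 1):      # larger subproblems from smaller ones
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(43))  # 433494437
```

Because only the last two entries are ever read, the table could be shrunk to two variables; the full table is kept here to mirror the tabular description above.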

**When can you use DP to solve a problem?**

Dynamic programming methods are suitable when a problem can be broken into subproblems that recur inside larger instances of the same problem, so solving the subproblems solves the larger problem. There is then a relationship between the value of the larger problem and the values of its subproblems; in the optimization literature, this relationship is called the Bellman equation.

**How can you solve a problem using dynamic programming?**

There are three steps to finding a dynamic programming solution to a problem:

- Identify the subproblems.
- Write a recurrence that expresses the solution to each subproblem in terms of simpler subproblems.
- Code an algorithm that evaluates the recurrence relation.
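
As an illustration of the three steps (the coin-change problem below is a classic example chosen for this sketch, not one discussed in the article):

```python
# Step 1 - subproblems: best[a] = fewest coins needed to make amount a.
# Step 2 - recurrence:  best[a] = 1 + min(best[a - c]) over coins c <= a,
#                       with base case best[0] = 0.
# Step 3 - code the recurrence, here bottom-up:
def min_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] != INF else -1  # -1: unreachable

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```

Each step of the recipe maps to one part of the code: the table definition, the inner relaxation, and the loop that evaluates it.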

**Two applications of dynamic programming in science**

Many applications use dynamic programming. Here are two examples.

**Mathematical optimization**

Dynamic programming is a method of simplifying a decision by breaking it into a sequence of decision steps over time. It is often used in mathematical optimization.

You can do this by defining a sequence of value functions V1, …, Vn, each taking a state y as an argument: Vi(y) represents the value of the system in state y at time i. The values Vn−1, Vn−2, …, V2, V1 are found by working backward using the Bellman equation. For i = n, …, 2, Vi−1 at any state y is computed by maximizing a simple function (often a sum) of the gain from a decision made at time i − 1 and the value Vi of the new system state that results from that decision. Since Vi has already been computed for the required states, this operation yields Vi−1. The value of the optimal solution is V1 at the initial state of the system, and the optimal values of the individual decision variables can be recovered by tracing back through the calculations.
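
The backward recursion can be sketched as follows; the states, decisions, gain function, and transition function below are invented purely for illustration:

```python
# Backward induction for V_{i-1}(y) = max_d [ gain(y, d) + V_i(f(y, d)) ].
STATES = [0, 1]
DECISIONS = [0, 1]

def gain(y, d):
    return y + d      # hypothetical immediate payoff of decision d in state y

def step(y, d):
    return d          # hypothetical transition: next state is the decision

def solve(n):
    V = {y: 0 for y in STATES}          # V_n: terminal values, all zero here
    for _ in range(n):                  # compute V_{i-1} from V_i, backward
        V = {y: max(gain(y, d) + V[step(y, d)] for d in DECISIONS)
             for y in STATES}
    return V                            # V_1 for every possible start state

print(solve(3))  # {0: 5, 1: 6}
```

Reading `solve(3)[y]` gives the best total gain achievable from initial state y over three decision stages; recording the maximizing decision at each stage would recover the optimal policy by trace-back.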

**Bioinformatics**

Bioinformatics uses dynamic programming for many tasks, including sequence alignment, protein folding, and RNA structure prediction. In the 1970s, Charles DeLisi (USA) and Georgii Gurskii (USSR) independently developed dynamic programming algorithms for protein-DNA binding. These algorithms remain widespread in bioinformatics, computational biology, and the study of nucleosome positioning.

**Conclusion**

Dynamic programming is a useful technique that yields an optimal solution to the overall problem. It simplifies complex problems by dividing them into simpler ones. That is why DP is used in so many fields.