# Dynamic Programming
Dynamic Programming (DP) and [divide-and-conquer (DNC)](../dnc) share a common strategy of breaking a problem down into smaller sub-problems. However, DP solves each sub-problem only once and preemptively eliminates sub-problems that cannot contribute to an optimal solution. This makes DP more efficient than DNC, which may solve the same sub-problem multiple times.
## Implementation
The Dynamic Programming (DP) approach to algorithms typically involves four steps:

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of the optimal solution in a bottom-up manner.
4. Determine the optimal solution using the values computed in the previous steps.
If only the value of the solution is needed, step 4 may be omitted. Conversely, when the solution itself is required, it is advisable to consider in advance which values the final step will need. This facilitates storing the pertinent information at step 3 and simplifies step 4.
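For example, in the classic rod-cutting problem, step 3 can record which first cut produced each optimal value, so that step 4 only has to walk those recorded choices to reconstruct the actual cuts. The sketch below is purely illustrative; the `cutRod` function and the price table are assumptions made for this example, not a problem from this repository.

```Go
package main

import "fmt"

// Illustrative sketch: cutRod returns the maximum revenue for a rod of
// length n, where prices[i] is the price of a piece of length i. While
// computing values bottom-up (step 3) it records the first cut chosen
// for each length, so the optimal cuts can be reconstructed (step 4).
func cutRod(prices []int, n int) (int, []int) {
	revenue := make([]int, n+1)
	firstCut := make([]int, n+1)

	for length := 1; length <= n; length++ {
		best := -1
		for i := 1; i <= length && i < len(prices); i++ {
			if candidate := prices[i] + revenue[length-i]; candidate > best {
				best = candidate
				firstCut[length] = i
			}
		}
		revenue[length] = best
	}

	// Step 4: walk the stored choices to recover the actual cuts.
	var cuts []int
	for length := n; length > 0; length -= firstCut[length] {
		cuts = append(cuts, firstCut[length])
	}
	return revenue[n], cuts
}

func main() {
	prices := []int{0, 1, 5, 8, 9, 10, 17, 17, 20}
	value, cuts := cutRod(prices, 8)
	fmt.Println(value, cuts) // 22 [2 6]
}
```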
There are two general approaches for writing DP algorithms: top-down and bottom-up.
### Top-Down
The top-down approach starts with the final problem and recursively breaks it down into smaller sub-problems; the final solution is then assembled from the solutions to those sub-problems. A caching mechanism stores each sub-problem's solution and prevents unnecessary re-calculation. This technique is also known as memoization.
The performance of the Fibonacci solution introduced in [recursion](../recursion/README.md) can be significantly improved by memoization.
```Go
package main

import (
	"fmt"
)

// fib caches previously computed Fibonacci numbers.
var fib = make(map[int]int)

func main() {
	for i := 1; i <= 10; i++ {
		fmt.Println(fibonacci(i))
	}
}

// fibonacci returns the nth Fibonacci number, storing each computed
// value in the fib map so every sub-problem is solved only once.
func fibonacci(n int) int {
	if n <= 1 {
		return n
	}
	if result, ok := fib[n]; ok {
		return result
	}
	fib[n] = fibonacci(n-1) + fibonacci(n-2)
	return fib[n]
}
```
### Bottom-Up
The bottom-up approach builds the solution iteratively from smaller sub-problems towards the final solution.
```Go
package main

import (
	"fmt"
)

// fibonacci returns the nth Fibonacci number by building the table of
// smaller sub-problems first and reading the final answer from it.
func fibonacci(n int) int {
	var fib = map[int]int{
		0: 0,
		1: 1,
	}

	for i := 2; i <= n; i++ {
		fib[i] = fib[i-1] + fib[i-2]
	}

	return fib[n]
}

func main() {
	for i := 1; i <= 10; i++ {
		fmt.Println(fibonacci(i))
	}
}
```
## Complexity
The complexity of DP algorithms depends on the number and structure of the sub-problems. The solution to the "House Robber" problem in the rehearsal section is an example of a bottom-up DP solution with O(n) complexity.
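The rehearsal solution itself is not reproduced here, but a bottom-up formulation along the following lines visits each house exactly once and therefore runs in O(n) time. The `rob` function name and the sample amounts are assumptions made for this sketch.

```Go
package main

import "fmt"

// rob is a sketch of a bottom-up solution: it returns the maximum
// amount that can be taken from a row of houses when no two adjacent
// houses may be robbed. Each house is visited once, so it runs in O(n).
func rob(houses []int) int {
	prev, curr := 0, 0 // best totals up to two houses back and up to the previous house
	for _, amount := range houses {
		take := prev + amount // rob this house, so the previous one must be skipped
		if take < curr {      // or skip this house and keep the previous best
			take = curr
		}
		prev, curr = curr, take
	}
	return curr
}

func main() {
	fmt.Println(rob([]int{2, 7, 9, 3, 1})) // 12
}
```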
Top-down and bottom-up solutions to the same problem usually have the same time complexity. For example, both the top-down and bottom-up solutions to the Fibonacci problem run in O(n) time.
## Application
DP is well-suited for tackling complex problems in logistics, game theory, machine learning, science, resource allocation, and investment policy. In [graph](../graph/) theory, DP is commonly used to determine the shortest path between two points. DP algorithms are particularly effective for optimization problems in which decisions are made over multiple stages and the goal is to identify the best sequence of decisions according to some criterion.
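As one illustration of DP on graphs (a general sketch, not a solution taken from this repository), the Floyd-Warshall algorithm computes the shortest path between every pair of vertices by repeatedly asking whether routing through one more intermediate vertex improves an already-computed distance. The sample adjacency matrix below is made up for the example.

```Go
package main

import (
	"fmt"
	"math"
)

// shortestPaths is a sketch of the Floyd-Warshall algorithm. dist[i][j]
// starts as the direct edge weight (or +Inf when there is no edge) and
// is improved by allowing one more intermediate vertex k at a time,
// a bottom-up DP over sub-problems of increasing size.
func shortestPaths(dist [][]float64) [][]float64 {
	n := len(dist)
	for k := 0; k < n; k++ {
		for i := 0; i < n; i++ {
			for j := 0; j < n; j++ {
				if through := dist[i][k] + dist[k][j]; through < dist[i][j] {
					dist[i][j] = through
				}
			}
		}
	}
	return dist
}

func main() {
	inf := math.Inf(1)
	dist := [][]float64{
		{0, 3, inf, 7},
		{8, 0, 2, inf},
		{5, inf, 0, 1},
		{2, inf, inf, 0},
	}
	for _, row := range shortestPaths(dist) {
		fmt.Println(row)
	}
}
```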