Dynamic programming
Dynamic programming is a mathematical optimization technique that solves complex problems by breaking them into smaller subproblems, solving each subproblem once, and reusing its solution instead of recomputing it.
There are two core properties that a dynamic programming problem must have:
- Optimal Substructure: The optimal answer to the problem can be constructed from the optimal answers to its subproblems
- Overlapping Subproblems: The same subproblems are solved multiple times, so their results can be cached and reused instead of recomputed
Problems that can be solved by breaking them into non-overlapping subproblems are not considered dynamic programming; that approach is divide and conquer (merge sort, for example).
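The overlap can be made visible by counting how often plain recursion revisits the same input. This is an illustrative sketch (the counter and the `naive_fib` helper are ours, not from the text above):

```python
from collections import Counter

calls = Counter()

def naive_fib(n):
    calls[n] += 1  # record every time this subproblem is solved
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(6)
print(calls)  # naive_fib(2) alone is solved 5 times
```

Without caching, the number of repeated calls grows exponentially with `n`, which is exactly what memoization eliminates.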
Most dynamic programming problems can be approached using the following sequence:
- Find recurrence relation
- Recursive (top-down): Convert recurrence relation into code
- Recursive + memo (top-down)
- Iterative + memo (bottom-up)
- Iterative + N variables (bottom-up)
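As an example of the first two steps, the Fibonacci recurrence F(n) = F(n-1) + F(n-2) translates directly into recursive code. This is a sketch of the plain top-down version, which runs in exponential time precisely because its subproblems overlap:

```python
def fib_recursive(n):
    # Base cases: F(0) = 0, F(1) = 1
    if n < 2:
        return n
    # Recurrence relation: F(n) = F(n-1) + F(n-2)
    return fib_recursive(n - 1) + fib_recursive(n - 2)

print(fib_recursive(10))  # 55
```

The remaining steps in the sequence each transform this code without changing the recurrence itself.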
There are two main approaches to dynamic programming problems:
- Top-Down (Memoization): Solve the problem via recursion while caching subproblem answers, so each subproblem is computed only once
cache = {}
def fibonacci(n):
    if n < 2: return n
    if n not in cache:
        cache[n] = fibonacci(n-1) + fibonacci(n-2)
    return cache[n]
print(fibonacci(5))
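In Python, the manual cache dictionary can also be replaced by the standard library's `functools.lru_cache` decorator, which memoizes the function automatically. A sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every subproblem result
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))  # 12586269025
```

Without the cache, `fibonacci(50)` would take on the order of billions of recursive calls; with memoization it needs only 51 distinct evaluations.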
- Bottom-Up (Tabulation): Start from the answers to the smallest subproblems and build up the answer to the full problem from them. One-dimensional problems typically use an array; two-dimensional problems use a grid (2D table).
def fib(n: int) -> int:
    if n <= 1:
        return n
    tab = [0] * (n+1)
    tab[1] = 1
    for i in range(2, n+1):
        tab[i] = tab[i-1] + tab[i-2]
    return tab[n]
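To illustrate the two-dimensional grid case mentioned above, here is a sketch of bottom-up tabulation on a classic 2D problem: counting the paths through an m×n grid when you may only move right or down (the problem choice is ours, for illustration):

```python
def unique_paths(m: int, n: int) -> int:
    # dp[i][j] = number of paths from the top-left cell to cell (i, j)
    dp = [[1] * n for _ in range(m)]  # first row and column each have one path
    for i in range(1, m):
        for j in range(1, n):
            # Recurrence: a cell is reached either from above or from the left
            dp[i][j] = dp[i-1][j] + dp[i][j-1]
    return dp[m-1][n-1]

print(unique_paths(3, 7))  # 28
```

The structure mirrors the 1D case exactly: the table is filled in an order that guarantees every value a cell depends on has already been computed.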
One particular optimization for Fibonacci numbers uses only two variables, since each value depends only on the previous two; this reduces the space complexity from O(n) to O(1):
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a