Dynamic programming is a concept developed by Richard Bellman, a mathematician and economist.

At the time, Bellman was looking for a way to solve complex optimization problems. Optimization problems require you to pick the best solution from a set of options.

An example of an optimization problem is the travelling salesman problem, where the goal is to find the shortest route that allows a salesman to visit each city exactly once and return to the starting city.

Bellman’s approach to these problems was to break them into smaller sub-problems and solve the sub-problems from the smallest to the largest. He then stored the results of the sub-problems and reused them to solve larger sub-problems. This is the main idea behind dynamic programming.

What is Dynamic Programming?

<img alt="YouTube video" data-pin-nopin="true" data-src="https://kirelos.com/wp-content/uploads/2023/01/echo/hqdefault.jpg63b3de964b352.jpg" height="360" src="data:image/svg xml,” width=”480″>

Dynamic programming solves optimization problems by breaking them down into smaller sub-problems, solving each sub-problem once, and storing their solutions so that they can be reused and combined to solve the larger problem. The problems are solved from the smallest to the largest, allowing solutions to be reused.

How Does Dynamic Programming Work?

Solving a problem using dynamic programming involves the following steps:

  1. Define the Sub-problems: A large problem is divided into small sub-problems.
  2. Solve the Sub-problems: This involves solving the identified sub-problem, which can be done using recursion or iteration.
  3. Store the Solutions: Solutions to sub-problems are stored so they can be reused.
  4. Construct the solution to the Original Problem: The solution to the large problem is constructed from the sub-problems that have already been calculated.

To see this in action, we calculate the 6th Fibonacci number, F(6), using this process.

First, define the sub-problems that need to be solved.

F(n) = F(n-1) + F(n-2) for n > 1

Therefore: F(6) = F(5) + F(4)

F(5) = F(4) + F(3)

F(4) = F(3) + F(2)

F(3) = F(2) + F(1)

F(2) = F(1) + F(0)

F(1) = 1

F(0) = 0

The second step involves solving each sub-problem using a recursive function or an iterative process. We solve the sub-problems from the smallest to the largest, reusing results from smaller sub-problems. This gives us the following:

F(0) = 0

F(1) = 1

F(2) = F(1) + F(0) = 1 + 0 = 1

F(3) = F(2) + F(1) = 1 + 1 = 2

F(4) = F(3) + F(2) = 2 + 1 = 3

F(5) = F(4) + F(3) = 3 + 2 = 5

F(6) = F(5) + F(4) = 5 + 3 = 8

As we solve each of the sub-problems, we store the solutions in an array or table so that they can be reused in solving larger sub-problems like so:

n F(n)
0 0
1 1
2 1
3 2
4 3
5 5
6 8

Once all of the sub-problems have been solved, we use the solutions to construct the solution to the original problem.

In this case, the solution to the original problem is the 6th Fibonacci number, found by summing the results of the two sub-problems F(5) and F(4): F(6) = 5 + 3 = 8.
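The same four steps can be expressed directly in code. The minimal sketch below (in Python, one of many languages you could use) computes F(6) by solving the sub-problems from smallest to largest and storing each result in a list so it can be reused:

```python
def fibonacci(n):
    # Store solutions to sub-problems F(0)..F(n) so each is computed only once.
    table = [0] * (n + 1)
    if n >= 1:
        table[1] = 1
    # Solve sub-problems from smallest to largest, reusing stored results.
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fibonacci(6))  # 8
```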

Where and Why Is Dynamic Programming Used?

Dynamic programming is used in areas where we have problems that can be divided into smaller sub-problems, and their solutions are used to solve larger problems.

These areas include computer science, economics, mathematics, and engineering. In computer science, it is used to solve problems involving sequences, graphs, and integer values, as well as in competitive programming.

In economics, it is used to solve optimization problems in finance, production, and resource allocation. In mathematics, dynamic programming is used in game theory, statistics, and probability, where it is used to solve optimization problems.

In engineering, it is used to solve problems in resource allocation, scheduling, manufacturing, communication, and control systems.

There are several advantages to using dynamic programming to solve optimization problems:

  1. Efficiency: Dynamic programming can be more efficient than other optimization algorithms as it avoids the recomputation of similar problems multiple times.
  2. Solving Large Problems: Dynamic programming is ideal for large optimization problems that would be infeasible to solve using other methods. This is because it breaks the problem into smaller problems, reducing their complexity.
  3. Optimal Solutions: Dynamic programming algorithms can find the optimal solution to a problem if the sub-problems and objectives are defined correctly.
  4. Simplicity: Dynamic programming algorithms are simple to implement and understand, especially once the sub-problems and the order in which to solve them have been defined.
  5. Extensibility: Dynamic programming algorithms can be easily extended to solve more complex problems by adding additional sub-problems and modifying the objectives of the problem.

When it comes to solving optimization problems, dynamic programming is a very useful tool to ensure efficiency in solutions.

Approaches Used in Dynamic Programming

<img alt="math" data- data-src="https://kirelos.com/wp-content/uploads/2023/01/echo/math-1500×766.jpg" data- decoding="async" height="766" src="data:image/svg xml,” width=”1500″>

In dynamic programming, two approaches are used to solve optimization problems. These are the top-down approach and the bottom-up approach.

Top-Down Approach

This approach is also known as memoization. Memoization is an optimization technique primarily used to make computer programs faster by storing the results of function calls in the cache and returning the cached results the next time they are needed rather than computing them again.

The top-down approach involves recursion and caching. Recursion involves a function calling itself with simpler versions of the problem as its argument. Recursion is used to break down the problem into smaller sub-problems and solve the sub-problems.

Once a sub-problem is solved, its result is cached and reused whenever the same sub-problem occurs again. The top-down approach is easy to understand and implement and solves each sub-problem only once. A downside, however, is that the recursion consumes a lot of memory on the call stack, which can lead to a stack overflow error.
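As a sketch of the top-down approach, here is a memoized version of the Fibonacci calculation in Python. The cache dictionary plays the role of the memo, and recursion breaks the problem down until it reaches the base cases:

```python
def fibonacci_memo(n, cache=None):
    # Top-down (memoization): recurse on smaller sub-problems and
    # cache each result so it is never recomputed.
    if cache is None:
        cache = {}
    if n in cache:
        return cache[n]  # return the cached result instead of recomputing
    if n <= 1:
        return n         # base cases: F(0) = 0, F(1) = 1
    cache[n] = fibonacci_memo(n - 1, cache) + fibonacci_memo(n - 2, cache)
    return cache[n]

print(fibonacci_memo(6))  # 8
```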

Bottom-Up Approach

The bottom-up approach, also known as tabulation, does away with recursion, replacing it with iteration, thus avoiding stack overflow errors.

In this approach, a large problem is broken into smaller sub-problems, and the solutions for the sub-problems are used to solve the larger problem.

The sub-problems are solved from the smallest to the largest, and their results are stored in a matrix, array, or table, hence the name tabulation.

The stored results are then used to solve the larger sub-problems that depend on them, and the result of the original problem is found by solving the largest sub-problem using the previously computed values.

This approach has the advantage of being memory and time efficient by doing away with recursion. 
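A bottom-up version of the Fibonacci calculation might look like the following sketch in Python. It iterates from the base cases upward and keeps only the two most recent results, which illustrates why the approach can be so memory-efficient:

```python
def fibonacci_tab(n):
    # Bottom-up (tabulation): iterate from the base cases upward,
    # keeping only the two previous results instead of a full table.
    if n <= 1:
        return n
    previous, current = 0, 1  # F(0) and F(1)
    for _ in range(2, n + 1):
        previous, current = current, previous + current
    return current

print(fibonacci_tab(6))  # 8
```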

Examples of Problems That Can Be Solved by Dynamic Programming

The following are some programming problems that can be solved using dynamic programming:

#1. Knapsack Problem

<img alt="Knapsack Problem" data-src="https://kirelos.com/wp-content/uploads/2023/01/echo/image-20.png" decoding="async" height="400" src="data:image/svg xml,” width=”800″>
Source: Wikipedia

A knapsack is a bag made of canvas, nylon, or leather typically strapped on the back and used by soldiers and hikers to carry supplies.

In the knapsack problem, you are given a knapsack with a fixed carrying capacity and a set of items, each with a value and a weight. You must select items so that their total value is maximized while their total weight is less than or equal to the knapsack’s capacity.

An example of the knapsack problem is given below:

Imagine that you are going on a hiking trip and have a knapsack with a capacity of 15 kilograms. You have a list of items that you can bring with you, along with their values and weights, as shown in the table below:

Item Value Weight (kg)
Tent 200 3
Sleeping bag 150 2
Stove 50 1
Food 100 2
Water bottle 10 0.5
First aid kit 25 1

Choose a subset of the items to bring such that the total value of the items is maximized while the total weight is less than or equal to the knapsack capacity, which is 15 kilograms.
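A dynamic programming sketch for this example is shown below in Python. It uses the classic 0/1 knapsack recurrence, where dp[w] holds the best total value achievable with capacity w. Because the water bottle weighs 0.5 kg, the weights are scaled to half-kilogram units so the capacities stay integers (an assumption made purely for this illustration):

```python
def knapsack(values, weights, capacity):
    # dp[w] = maximum total value achievable with a knapsack of capacity w
    dp = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downward so each item is used at most once (0/1 knapsack).
        for w in range(capacity, weight - 1, -1):
            dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

# Items: tent, sleeping bag, stove, food, water bottle, first aid kit
values = [200, 150, 50, 100, 10, 25]
weights_half_kg = [6, 4, 2, 4, 1, 2]  # weights scaled to 0.5 kg units
capacity_half_kg = 30                 # 15 kg knapsack

print(knapsack(values, weights_half_kg, capacity_half_kg))  # 535: all items fit within 15 kg
```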

Real-world applications of the knapsack problem include selecting securities to add to a portfolio in a way that minimizes risk and maximizes profit, and finding the least wasteful ways to cut raw materials.

#2. Scheduling Problem

<img alt="Scheduling-problem" data- data-src="https://kirelos.com/wp-content/uploads/2023/01/echo/Scheduling-problem.png" data- decoding="async" height="400" src="data:image/svg xml,” width=”800″>

A scheduling problem is an optimization problem in which the goal is to optimally assign tasks to a set of resources. The resources may be machines, personnel, or other resources used to complete the tasks.

An example of a scheduling problem is given below:

Imagine that you are a project manager responsible for scheduling a set of tasks that need to be completed by a team of employees. Each task has a start time, an end time, and a list of employees who are qualified to complete it. 

Here is a table that describes the tasks and their characteristics:

Task Start time End time Qualified employees
T1 9 11 A, B, C
T2 10 12 A, C
T3 11 13 B, C
T4 12 14 A, B

The goal is to assign each task to a qualified employee such that the total completion time is minimized.
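The exact dynamic programming formulation depends on how tasks, resources, and overlaps are modelled. A common simplified variant is weighted interval scheduling, where a single resource must pick non-overlapping tasks of maximum total value. A sketch in Python (with illustrative task values that are assumptions, not taken from the table above) might look like this:

```python
import bisect

def weighted_interval_scheduling(tasks):
    # Each task is (start, end, value). Sort tasks by end time.
    tasks = sorted(tasks, key=lambda t: t[1])
    ends = [t[1] for t in tasks]
    n = len(tasks)
    # dp[i] = best total value using only the first i tasks (in end-time order)
    dp = [0] * (n + 1)
    for i in range(1, n + 1):
        start, end, value = tasks[i - 1]
        # Number of earlier tasks that end no later than this task starts.
        j = bisect.bisect_right(ends, start, 0, i - 1)
        dp[i] = max(dp[i - 1],       # skip this task
                    dp[j] + value)   # take it, plus the best compatible prefix
    return dp[n]

# Illustrative tasks (start, end, value); the values are assumed for the example.
tasks = [(9, 11, 20), (10, 12, 25), (11, 13, 30), (12, 14, 15)]
print(weighted_interval_scheduling(tasks))  # 50: tasks (9, 11) and (11, 13)
```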

The scheduling problem can be encountered in the manufacturing industry when trying to optimize the allocation of resources such as machines, materials, tools, and labour.

It can also be encountered in healthcare when optimizing the use of beds, personnel, and medical supplies. Other industries where this problem can occur are project management, supply chain management, and education.

#3. Travelling Salesman Problem

<img alt="Traveling salesman problem" data-src="https://kirelos.com/wp-content/uploads/2023/01/echo/image-19.png" decoding="async" height="500" src="data:image/svg xml,” width=”800″>
Source: Wikipedia

This is one of the most studied optimization problems that can be solved using dynamic programming.

The travelling salesman problem provides a list of cities and the distances between each pair of cities. You are required to find the shortest possible route that visits each city exactly once and returns to the origin city.

An example of a travelling salesman problem is given below:

Imagine that you are a salesperson who needs to visit a set of cities in the shortest possible time. You have a list of the cities that you need to visit and the distances between each pair of cities, as shown in the table below:

City A B C D E
A 0 10 15 20 30
B 10 0 35 25 15
C 15 35 0 30 20
D 20 25 30 0 10
E 30 15 20 10 0
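Dynamic programming solves this exactly with the Held-Karp algorithm, which finds the shortest path reaching each city through every possible subset of already-visited cities. A sketch in Python, using the distance matrix above (cities A to E indexed 0 to 4), might look like this:

```python
from itertools import combinations

def held_karp(dist):
    # dist[i][j] = distance between city i and city j; city 0 is the start.
    n = len(dist)
    # dp[(subset, j)] = shortest path from city 0 through all cities in
    # `subset` (excluding city 0), ending at city j.
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            s = frozenset(subset)
            for j in subset:
                dp[(s, j)] = min(dp[(s - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

distances = [
    [0, 10, 15, 20, 30],
    [10, 0, 35, 25, 15],
    [15, 35, 0, 30, 20],
    [20, 25, 30, 0, 10],
    [30, 15, 20, 10, 0],
]
print(held_karp(distances))  # 80 for this distance matrix
```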

The travelling salesman problem can be encountered in the leisure industry when trying to plan routes for tourists, logistics when planning the shipping of goods, transport when planning bus routes, and in the sales industry, among others.

Clearly, dynamic programming has many real-world applications, which makes it well worth learning more about.

Consider the following resources to expand your knowledge of dynamic programming.

Resources

Dynamic Programming by Richard Bellman

Dynamic Programming is a book by Richard Bellman, who came up with dynamic programming and developed it in its early stages.

The book is written in an easy-to-understand way that only requires basic knowledge of mathematics and calculus to understand the text. In the book, Bellman introduces the mathematical theory of a multistage decision process which is key in dynamic programming.

The book then examines bottleneck problems in multistage production processes, existence and uniqueness theorems, and the optimal inventory equation.

The best thing about the book is that Bellman offers examples of many complex problems in fields such as logistics, scheduling theory, communication theory, mathematical economics, and control processes and shows how dynamic programming can solve the problems.

The book is available in Kindle, hardcover, and paperback versions.

Dynamic Programming Algorithms Master Course

<img alt="Dynamic Programming Algorithms Master Course" data- data-src="https://kirelos.com/wp-content/uploads/2023/01/echo/image-21-1500×425.png" data- decoding="async" src="data:image/svg xml,” width=”800″>

This Dynamic Programming Algorithms Master Course by Udemy is offered by Apaar Kamal, a software engineer at Google, and Prateek Narang, who also worked with Google.

The course is optimized to help learners excel in programming competitions, which feature a lot of problems that require dynamic programming.

Aside from competitive programmers, the course is ideal for programmers looking to improve their understanding of algorithms and for people preparing for programming interviews and online coding rounds.

The course, which is over 40 hours long, covers dynamic programming in depth. It first offers a refresher on concepts such as recursion and backtracking.

It then covers dynamic programming in game theory, strings, trees & graphs, matrix exponentiation, bitmasks, combinatorics & subsequences, partition problems, and multi-dimensional dynamic programming, among many other concepts.

Competitive Programming Essentials, Master Algorithms

<img alt="Competitive Programming Essentials, Master Algorithms" data- data-src="https://kirelos.com/wp-content/uploads/2023/01/echo/image-22.png" data- decoding="async" src="data:image/svg xml,” width=”800″>

Udemy offers a Competitive Programming Essentials Course by Prateek Narang and Amal Kamaar that covers dynamic programming, maths, number theory, and advanced data structures & algorithms in a manner that is useful and relevant to competitive programmers.

The course offers a refresher on data structures and algorithms before diving into more complex algorithms and techniques that come in handy in competitive programming.

The course covers dynamic programming, mathematics, game theory, pattern matching, Bitmasking, and a myriad of advanced algorithms used and tested in programming competitions.

The Udemy course is divided into 10 modules and 42 sections and provides lots of practice questions after each section. This bestseller course is a must-have for anyone interested in competitive programming.

Final Words

Dynamic programming is a beneficial skill for any programmer to learn, as it improves their ability to solve real-world problems. Therefore, programmers should consider going through the suggested resources to add this crucial tool to their toolbox.

Next, you can check out programming languages to use in data science.