
Recursion Trees Primer

Recursion trees.

Mastering the fundamentals is the cornerstone of mastering a topic.  In our case, in order to be a good developer it's not enough, and maybe not even that important, to know the latest Java/JavaScript/big data technology; what's really important is the basics.  And the basics in computer science are math, statistics, algorithms and computer architecture.

Steve Wozniak, the co-founder of Apple, said much the same: what gave him his relative advantage was his deep understanding of programming and computer architecture.  This is what gave him the ability to create computers that were less costly than the competitors' (not that there were many).  By the way, Apple had three founders: one responsible for the technical side (Wozniak), one for the product and sales (Steve Jobs), and a third responsible for the company's structure and growth.  Each of the three was extremely important; it was not only the two Steves, but that's a topic for another episode.

And with that let's get started with recursion trees.

In software engineering we solve problems, and algorithms are the basis of problem solving; they are the ground theory for it.

One of the most fundamental strategies for solving problems is called "divide and conquer".

In divide and conquer you take one problem, say over n items, and you divide it, say into subproblems of n/2 items each, and then solve each subproblem, with the hope that each subproblem is small enough that its cost is not too high.  Then, of course, you need to combine the solutions of the subproblems into a solution of the main problem.
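
To make this concrete, here is a minimal divide-and-conquer merge sort sketch in Java (the class and helper names are mine, for illustration only): the divide step splits the array into two halves of n/2 items, the recursive calls solve each half, and the combine step merges the two sorted halves in linear time.

    import java.util.Arrays;

    public class MergeSort {

        // Divide: split the n items into two halves of n/2 each, solve each half
        // recursively, then combine the two sorted halves.
        static int[] sort(int[] a) {
            if (a.length <= 1) return a;  // base case: a leaf of the recursion tree
            int mid = a.length / 2;
            int[] left = sort(Arrays.copyOfRange(a, 0, mid));          // first subproblem
            int[] right = sort(Arrays.copyOfRange(a, mid, a.length));  // second subproblem
            return merge(left, right);    // combine: O(n) work at this node
        }

        // Combine: merge two sorted arrays in linear time.
        static int[] merge(int[] l, int[] r) {
            int[] out = new int[l.length + r.length];
            int i = 0, j = 0, k = 0;
            while (i < l.length && j < r.length)
                out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
            while (i < l.length) out[k++] = l[i++];
            while (j < r.length) out[k++] = r[j++];
            return out;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(sort(new int[]{5, 2, 4, 7, 1, 3, 2, 6})));
        }
    }

Note how the structure of the code mirrors the structure of the tree we are about to draw: one node's worth of work (the merge) plus children (the recursive calls).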

When you solve a problem this way, a question is raised: what is the running time of this problem-solving technique?  This technique is commonly implemented with recursion.

This is where recursion trees come into play.

Recursion trees are a pictorial, graphical representation of the divide and conquer technique for solving problems.  They are great for getting an intuition about how long our algorithm will take to run, and the best thing is that you can actually draw a picture of a tree.

Why do we draw a tree?  Because we are reducing the problem into subcomponents and solving each one.  Just like in a tree, we have a root node; in our case the root node is our problem, and the value of that root node is the time the algorithm spends at the root itself.  Since we delegated the rest of the problem to child nodes, that value is in most cases smaller than the total running time, though we don't know by how much yet.

Then we draw arrows from the root node to multiple child nodes, where each child is a subproblem and the value of each child is the time it takes to solve that child.

To be more specific, our algorithm is recursive; this is why we use recursion trees, and this is how we are able to reduce our problem into subproblems with divide and conquer.

Suppose the non-recursive part of the algorithm at a node with n items takes O(f(n)) time, the algorithm makes r recursive calls, and at each level of the tree the problem shrinks by a factor of c, so each recursive call works on a subproblem of n/c items.  Then each child node holds n/c items and its non-recursive work costs O(f(n/c)).

Therefore, in general, the total running time of our algorithm would be

T(n) = r*T(n/c) + f(n)

Where
T(n) is the total time it takes us to solve a problem of size n, r * T(n/c) is the total time it takes us to solve the r subproblems, and f(n) is the combine (and divide) time we also need to apply in order to go from all the subproblem solutions back to the top problem.
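
For example, merge sort makes r = 2 recursive calls, each on half of the input (c = 2), and merging the two sorted halves takes linear time, so f(n) = O(n) and its recurrence is

T(n) = 2*T(n/2) + O(n)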

When we solve a problem with recursion we have base cases, right?  So in our case, when we have drawn the tree, the leaves of that tree are what represent the base cases, and they usually take constant time to execute.  You can simply assume that T(n) for the base cases, the leaves, is a constant: O(1).
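
In merge sort, for instance, the leaves are single-element arrays, which are already sorted, so each leaf costs O(1).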

The general structure, then, is a tree with r subproblems per node, where each level shrinks the number of items by a factor of c: the root has n items, each node on the next level has n/c, the level after that n/c^2, and so on down to n/c^L, where L is the depth of the tree.

The depth of the tree is L = log_c(n), because that is how many times you can divide n by c before reaching the base case.  This brings us to the general formula for the overall running time of a recursion tree, which is

T(n) = Σ_{i=0}^{L} r^i * f(n/c^i)
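
To see why this sum is the right bookkeeping: level i of the tree has r^i nodes, each holding n/c^i items and therefore doing f(n/c^i) non-recursive work.  For merge sort (r = 2, c = 2, f(n) = n) every level contributes exactly n:

T(n) = Σ_{i=0}^{log_2(n)} 2^i * (n/2^i) = n * (log_2(n) + 1) = O(n log n)

With n = 1024, for example, that is 11 levels of 1024 work each, about 11,264 operations in total.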

When we look at this formulation of the general running time of a recursion tree, we need to remember that there are 3 distinct cases.

The first case is the one where the work decreases exponentially from level to level, i.e., every term in the sum is a constant factor smaller than the previous term.  In this case the decrease is so drastic that the root dominates, and the total time T(n) is just O(f(n)).

The second case is the one we know from standard sorting algorithms such as merge sort, where the terms of all levels are equal; in this case the total time is O(f(n) * log n).

In the third case the work grows from level to level, the series grows exponentially, and so it's actually the leaves that dominate the total running time.
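
To build intuition for the three cases, here is a small, purely illustrative Java sketch (the class, method names, and parameter choices are all mine) that numerically evaluates the recursion-tree sum for f(n) = n with different values of r and c, one per case:

    // Illustrative only: numerically evaluate the recursion-tree sum
    // T(n) = sum for i = 0..L of r^i * f(n/c^i), with f(m) = m.
    public class RecursionTreeSum {

        static double treeSum(double n, int r, int c) {
            // L = log_c(n): how many times n can be divided by c before reaching 1
            int levels = 0;
            for (double m = n; m > 1; m /= c) levels++;

            double total = 0;
            for (int i = 0; i <= levels; i++) {
                double nodes = Math.pow(r, i);           // r^i nodes at level i
                double workPerNode = n / Math.pow(c, i); // f(n/c^i) with f(m) = m
                total += nodes * workPerNode;
            }
            return total;
        }

        public static void main(String[] args) {
            // Case 1, root dominates (r = 1, c = 2): terms shrink geometrically -> ~2n
            System.out.println(treeSum(1024, 1, 2)); // 2047
            // Case 2, all levels equal (r = 2, c = 2, the merge sort shape) -> n * (log_2(n) + 1)
            System.out.println(treeSum(1024, 2, 2)); // 11264
            // Case 3, leaves dominate (r = 4, c = 2): terms double each level -> ~2n^2
            System.out.println(treeSum(1024, 4, 2)); // 2096128
        }
    }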

Let's consider an example.  In quicksort, suppose we somehow manage to always choose the pivot to be the maximum element (this is quicksort's worst case).  In that case, at each level of the tree we end up with n-1 items still to solve and 1 item, the pivot, that is already in place, which means the tree has n levels.  So T(n) = T(n-1) + T(1) + O(n) partition time, and this quickly becomes T(n) = O(n^2): the levels cost roughly n, n-1, ..., 1, which sums to n(n+1)/2 = O(n^2).
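
Here is a deliberately pathological Java sketch (my own illustrative code, with a simplified partition that always "chooses" the maximum element as the pivot) that counts the partition work and makes the n + (n-1) + ... + 1 pattern visible:

    // Quicksort degenerated on purpose: the pivot is always the maximum,
    // so every level of the recursion tree removes only one item.
    public class QuicksortWorstCase {
        static long work = 0;

        static void sort(int[] a, int lo, int hi) {
            if (lo >= hi) return;      // base case: a leaf of the recursion tree
            int p = partitionAroundMax(a, lo, hi);
            sort(a, lo, p - 1);        // only the left side remains; the right is empty
        }

        // Simplified partition that picks the maximum element as the pivot
        // and moves it into its final position at the end of the range.
        static int partitionAroundMax(int[] a, int lo, int hi) {
            int maxIdx = lo;
            for (int i = lo; i <= hi; i++) {  // O(n) scan at this level of the tree
                work++;
                if (a[i] > a[maxIdx]) maxIdx = i;
            }
            int tmp = a[maxIdx]; a[maxIdx] = a[hi]; a[hi] = tmp;
            return hi;
        }

        public static void main(String[] args) {
            int n = 1000;
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = n - i; // reverse-sorted input
            sort(a, 0, n - 1);
            System.out.println(work); // 500499, roughly n^2 / 2, i.e. O(n^2)
        }
    }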

To sum up: in a recursion tree we draw a tree where the first node, the root, stands for T(n), and we then break the problem into subproblems, which are the children in the tree.  Then, depending on the branching of the tree and the rate at which the work decreases per level, we can determine, even intuitively, the running time of the algorithm at hand.




