
Recursion Trees Primer


Mastering the fundamentals is the cornerstone of mastering any topic.  In our case, to be a good developer it's not enough, and perhaps not even important, to master the latest Java/JavaScript/big data technology; what really matters is the basics.  And the basics of computer science are math, statistics, algorithms, and computer architecture.

Steve Wozniak, the co-founder of Apple, said much the same: what gave him his relative advantage was his deep understanding of programming and computer architecture, and that is what let him build computers that cost less than the competitors' (not that there were many).  By the way, Apple had three founders: one responsible for the technical side, one for product and sales (Steve Jobs), and a third responsible for the company's structure and growth.  Each of the three was extremely important; it was not only the two Steves, but that's a topic for another episode.

And with that let's get started with recursion trees.

In software engineering we solve problems, and algorithms are the basis of problem solving; they are the ground theory on which it all rests.

One of the most important patterns for solving problems is called "divide and conquer".

In divide and conquer you take one problem, say over n items, and divide it, say into subproblems of n/2 items each, and then solve each subproblem, with the hope that each subproblem is small enough that its cost is not too high.  Then, of course, you need to combine the solutions of the subproblems into a solution of the main problem.
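
To make this concrete, here is a minimal sketch in Java using merge sort, the classic divide and conquer example (the class and method names are just for illustration):

import java.util.Arrays;

public class MergeSortSketch {
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;                  // base case: a leaf of the tree
        int mid = a.length / 2;                       // divide: n items -> two subproblems of n/2
        int[] left  = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);                    // combine: O(n) work at this node
    }

    static int[] merge(int[] l, int[] r) {
        int[] out = new int[l.length + r.length];
        int i = 0, j = 0, k = 0;
        while (i < l.length && j < r.length)          // pick the smaller head each time
            out[k++] = (l[i] <= r[j]) ? l[i++] : r[j++];
        while (i < l.length) out[k++] = l[i++];       // drain whatever is left over
        while (j < r.length) out[k++] = r[j++];
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 4, 1, 3})));
        // prints [1, 2, 3, 4, 5]
    }
}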

When you solve a problem this way, a question arises: what is the running time of this problem-solving technique?  This technique is commonly implemented with recursion.

This is where recursion trees come into play.

Recursion trees are a pictorial, graphical representation of the divide and conquer technique for solving problems.  They are great for getting an intuition into how long our algorithms will take to run, and the best thing is that you can actually draw a picture of a tree.

Why do we draw a tree?  Because we are reducing the problem into subcomponents and solving each one.  So, just like in a tree, we have a root node; in our case the root node is our problem, and the value of that root node is the time the algorithm spends at the root itself.  Since we delegate the rest of the problem to the subnodes, that value is in most cases smaller than the total running time, though we don't know by how much yet.

Then we draw arrows from the root node to multiple child nodes, where each child is a subproblem and the value of each child is the time it takes the algorithm to solve that child.
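
For example, for an algorithm that makes two recursive calls and halves the input each time, the first few levels of the tree look like this:

              n              <- root: the whole problem, f(n) work
           /     \
        n/2       n/2        <- 2 nodes, f(n/2) work each
       /   \     /   \
     n/4  n/4  n/4  n/4      <- 4 nodes, f(n/4) work each, and so on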

To be more specific, our algorithm is recursive; that is why we use recursion trees, and that is how we are able to reduce our problem into subproblems with divide and conquer.

Let's say the non-recursive part of the algorithm (the dividing and combining work) takes O(f(n)) time at a node with n items, the algorithm makes r recursive calls, and each call works on a subproblem of size n/c.  Then each subnode holds n/c items and contributes O(f(n/c)) non-recursive work of its own.

Therefore, in its most general form, the total running time of our algorithm is

T(n) = r*T(n/c) + f(n)

Where
T(n) is the total time it takes us to solve the problem, r * T(n/c) is the total time it takes to solve all of the subproblems, and to get to T(n) we also need to add the divide-and-combine time f(n) to go from all the subproblems back up to the top problem.
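
For example, merge sort makes r = 2 recursive calls, each on half of the input (c = 2), and the combine (merge) step is linear, so its recurrence reads

T(n) = 2*T(n/2) + O(n)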

When we solve a problem with recursion we have base cases, right?  So in our case, once we have drawn the tree, the leaves of that tree represent the base cases, and they usually take constant time to execute.  You can simply assume that T(n) for the base cases, i.e. the leaves, is a constant: T(1) = O(1).

The general structure, then, is a tree where each node spawns r subproblems, and the number of items per node shrinks from n to n/c from one level to the next: the level below that has n/c^2 items per node, and so on down to n/c^L, where L is the depth of the tree.

The depth of the tree is L = log_c(n), and this brings us to the general formula for the overall time complexity of the recursion tree, which is

T(n) = Σ (i = 0 to L) of r^i * f(n/c^i)
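
As a quick sanity check, here is a minimal Java sketch that evaluates this sum level by level, assuming linear divide-and-combine work (f(n) = n); the class and method names are just for illustration:

public class RecursionTreeSum {
    // Evaluates sum over i = 0..L of r^i * f(n/c^i), with f(n) = n.
    static double treeSum(int n, int r, int c) {
        double total = 0;
        long levelNodes = 1;        // r^i nodes on level i
        double levelSize = n;       // n/c^i items per node on level i
        while (levelSize >= 1) {    // L = log_c(n) levels until size 1
            total += levelNodes * levelSize;
            levelNodes *= r;
            levelSize /= c;
        }
        return total;
    }

    public static void main(String[] args) {
        // Merge sort shape: r = 2, c = 2, n = 1024.
        System.out.println(treeSum(1024, 2, 2));  // prints 11264.0
    }
}

For r = 2 and c = 2 it prints 11264.0 for n = 1024, which is exactly 1024 * 11, i.e. n * (log_2(n) + 1).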

When we look at this formulation of the general running time of a recursion tree, we need to remember that there are 3 distinct cases.

The first case is the one where the work per level decreases geometrically, i.e. every term is a constant factor smaller than the previous term.  In this case the decrease is so drastic that the root dominates, and the total time to process the n elements is simply T(n) = O(f(n)).
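
For example, take T(n) = T(n/2) + O(n): the work per level is n, n/2, n/4, ..., a decreasing geometric series that sums to less than 2n, so T(n) = O(n) = O(f(n)), and the root dominates.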

The second case is the one we know from standard sorting algorithms such as merge sort, where the terms at every level are equal.  In this case we get T(n) = O(f(n) * log n).
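
For merge sort, plugging r = 2, c = 2 and f(n) = n into the sum gives 2^i * (n/2^i) = n at every level, and there are log_2(n) + 1 levels, so T(n) = O(n log n), matching what we know about merge sort.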

In the third case the series grows geometrically, each term a constant factor larger than the previous one, so it is actually the leaves that dominate the running time.
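
A classic example of this case is Karatsuba multiplication, with T(n) = 3*T(n/2) + O(n): level i contributes 3^i * (n/2^i) = n * (3/2)^i, a growing geometric series, so the leaves dominate and T(n) = O(3^(log_2 n)) = O(n^(log_2 3)) ≈ O(n^1.585).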

Let's consider an example: suppose that in quick sort we somehow manage to always choose the pivot to be the maximum element (this is the worst case of quick sort).  Then at each level of the tree we end up with n-1 items still to solve and 1 item already in place, which means the tree has n levels.  So T(n) = T(n-1) + T(1) + O(n), where O(n) is the partition work at that level; this quickly becomes T(n) = O(n^2), because each level costs up to O(n) and there are n such levels.
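
Written out, the work per level is n, n-1, n-2, ..., 1, so T(n) = n + (n-1) + ... + 1 = n(n+1)/2 = O(n^2).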

To sum up: in a recursion tree we draw a tree whose root node represents T(n); we then break the problem into subproblems, which become the children in the tree.  Then, depending on the branching of the tree and the rate at which the work shrinks or grows per level, we can determine, even intuitively, the running time of the algorithm at hand.




