Recursion Demystified

To understand recursion, you must first understand recursion.

Crazy, isn't it?

Well, I hope that by the end of this article you will feel much more confident about what recursion is and, more importantly, how we can come up with a recursive solution to a problem.

What is recursion?

How do you explain recursion to a 4-year-old? This is a pretty famous interview question, and there are plenty of answers available on the web. We won't answer this question, since it's too mainstream.

If you're as smart as me, you would explain recursion to someone one year younger than you. Ask them to explain recursion to someone one year younger than them. Keep going until you have a 5-year-old explaining recursion to a 4-year-old. Done. [Source: reddit].

In terms of programming, recursion is

A function that calls itself.
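The snippet itself isn't reproduced in this copy; a minimal sketch of such a function, using the random_function name referenced below and an assumed stopping condition at 0, could be:

def random_function(n):
    # Stop once we reach 0; without this, the calls would never end.
    if n == 0:
        return
    # The function calls itself on a smaller input.
    random_function(n - 1)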

The function above doesn't do any useful work as such, but it does demonstrate recursion. Its recurrence relation would be

T(N) = T(N - 1) + O(1)

This simply means that the execution of the call random_function(n) cannot proceed until the call to random_function(n - 1) has completed, and so on.

In essence, we delay the execution of the current state of the function until another call to the same function has finished and returned its result.

The program keeps saving the state of the current function call and then proceeds to the next function call, and so on. These function states are stored on a stack (the call stack), which is used for computation and backtracking.

In essence, if a problem can be broken down into similar subproblems that can be solved individually, and whose solutions can be combined to obtain the overall solution, then we say that a recursive solution to the problem may exist.

Instead of getting stuck on this rather dry definition of recursion, we will look at a whole range of applications of recursion. Hopefully things will then become clear.

Factorial of a Number

Let's see how we can find the factorial of a number. Before that, let's look at what the factorial of a number represents and how it is calculated.

factorial(N) = 1 * 2 * 3 * .... * (N - 1) * N

Simply put, the factorial of a number is just the product of the terms from 1 to N multiplied together.

We could simply have a for loop from 1 to N, multiply all the terms iteratively, and we would have the factorial of the given number.
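For instance, an iterative version might look like this minimal sketch (factorial_iterative is an assumed name, not from the original article):

def factorial_iterative(n):
    result = 1
    # Multiply all the terms from 1 to n together.
    for i in range(1, n + 1):
        result *= i
    return result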

But if you look closely, there is an inherent recursive structure to the factorial of a number.

factorial(N) = N * factorial(N - 1)

It's like offloading the computation to another function call that operates on a smaller version of the original problem. Let's see how this relation unfolds, to check whether the solution it gives matches the one produced by the for loop.

So, it is evident from the two figures above that the recursive relation we defined earlier,

factorial(N) = N * factorial(N - 1)

is indeed correct. Take a look at the Python snippet used to compute the factorial of a number recursively.
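The snippet isn't reproduced in this copy, but a recursive factorial in Python would plausibly look like this:

def factorial(n):
    # Base case: the factorial of 0 and 1 is 1. This stops the recursion.
    if n <= 1:
        return 1
    # Recursive case: factorial(N) = N * factorial(N - 1)
    return n * factorial(n - 1)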

This example was pretty simple. Let's consider a slightly bigger, but still standard, example to demonstrate the concept of recursion.

The Fibonacci Sequence

You're probably already familiar with the famous Fibonacci sequence. For those of you who haven't heard of this sequence or seen an example before, let's look at it.

1 1 2 3 5 8 13 ..... 

Let's look at the formula for calculating the n-th Fibonacci number.

F(n) = F(n - 1) + F(n - 2), where F(1) = F(2) = 1

Clearly, this definition of the Fibonacci sequence is recursive in nature, since the n-th Fibonacci number depends on the previous two Fibonacci numbers. That means breaking the problem down into smaller subproblems, and hence recursion. Have a look at the code for this:
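The code isn't reproduced in this copy; a direct translation of the recurrence into Python would plausibly be:

def fibonacci(n):
    # Base cases: F(1) = F(2) = 1
    if n <= 2:
        return 1
    # Recursive case: F(n) = F(n - 1) + F(n - 2)
    return fibonacci(n - 1) + fibonacci(n - 2)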

Hvert rekursivt problem skal have to nødvendige ting:

  1. A recurrence relation that defines the state of the problem and how the main problem can be broken down into smaller subproblems. This also includes the base case that stops the recursion.
  2. A recursion tree that shows the first few, if not all, calls to the function under consideration. Have a look at the recursion tree for the Fibonacci sequence's recurrence relation.

The recursion tree shows us that the results obtained from processing the two subtrees of the root N can be used to compute the result for the tree rooted at N. Similarly for the other nodes.

The leaves of this recursion tree are fibonacci(1) or fibonacci(2), both of which represent the base cases of this recursion.

Now that we have a very basic understanding of recursion, of what a recurrence relation is, and of recursion trees, let's move on to something more interesting.

Examples!

I strongly believe in solving a number of examples on a given programming topic in order to master it. The two examples we considered (the factorial of a number and the Fibonacci sequence) had well-defined recurrence relations. Let's look at a few examples where the recurrence relation might not be so obvious.

Height of a Tree

To keep things simple for this example, we will only consider a binary tree. A binary tree is a tree data structure in which each node has at most two children. One node of the tree is designated as the root of the tree, for example:

Let's define what we mean by the height of a binary tree.

The height of the tree is the length of the longest root-to-leaf path in the tree.

So, for the example diagram displayed above, considering the node labelled A as the root of the tree, the longest root-to-leaf path is A → C → E → G → I. Essentially, the height of this tree is 5 if we count the number of nodes and 4 if we just count the number of edges on the longest path.

Now, forget about the entire tree and just focus on the portions highlighted in the diagram below.

The above figure shows us that we can represent a tree in terms of its subtrees. Essentially, the structure to the left of node A and the structure to the right of A are binary trees in their own right, just smaller and with different root nodes. But they are binary trees nonetheless.

What information can we get from these two subtrees that would help us find the height of the main tree rooted at A ?

If we knew the height of the left subtree, say h1, and the height of the right subtree, say h2, then we can simply say that the maximum of the two + 1 for the node A would give us the height of our tree. Isn’t that right?

Formalizing this recursive relation,

height(root) = max(height(root.left), height(root.right)) + 1

So, that’s the recursive definition of the height of a binary tree. The focus is on binary here, because we used just two children of the node root represented by root.left and root.right. But, it is easy to extend this recursive relation to an n-ary tree. Let’s take a look at this in code.
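The original snippet isn't shown in this copy; a minimal sketch in Python, assuming a simple TreeNode class with left and right pointers, could be:

class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def height(root):
    # Base case: an empty tree has height 0.
    if root is None:
        return 0
    # 1 for the current node plus the height of the taller subtree.
    return max(height(root.left), height(root.right)) + 1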

The problem here was greatly simplified because we let recursion do all the heavy lifting for us. We simply used the optimal answers for our subproblems to find a solution to our original problem.

Let's look at another example that can be solved along similar lines.

Number of Nodes in a Tree

Here again we will consider a binary tree for simplicity, but the algorithm and the approach can essentially be extended to any kind of tree.

The problem is pretty self-explanatory. Given the root of a binary tree, we need to determine the total number of nodes in the tree. This question, and the approach we will come up with here, is very similar to the previous one. We just need to make small changes and we will have the number of nodes in the binary tree.

Take a look at the diagram below.

The diagram says it all. We already know that a tree can be broken down into smaller subtrees. Here again we can ask ourselves,

What information can we get from these two subtrees that would help us find the number of nodes in the tree rooted at A?

Well, if we knew the number of nodes in the left subtree and the number of nodes in the right subtree, we can simply add them up and add one for the root node and that would give us the total number of nodes.

Formalizing this we get,

number_of_nodes(root) = number_of_nodes(root.left) + number_of_nodes(root.right) + 1

If you look at this recursion and the previous one, you will find that they are extremely similar. The only thing that is varying is what we do with the information we obtained from our subproblems and how we combined them to get some answer.
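Expressed in code, and reusing the assumed TreeNode class from the height example, the sketch differs only in how the subproblem results are combined:

def number_of_nodes(root):
    # Base case: an empty tree contains no nodes.
    if root is None:
        return 0
    # Nodes in the left subtree + nodes in the right subtree + 1 for the root itself.
    return number_of_nodes(root.left) + number_of_nodes(root.right) + 1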

Now that we have seen a couple of easy examples with a binary tree, let’s move onto something less trivial.

Merge Sort

Given an array of numbers like

4 2 8 9 1 5 2

we need to come up with a sorting technique that sorts them either in ascending or descending order. There are a lot of famous sorting techniques out there for this like Quick Sort, Heap Sort, Radix Sort and so on. But we are specifically going to look at a technique called the Merge Sort.

It’s possible that a lot of you are familiar with the Divide and Conquer paradigm, and this might feel redundant. But bear with me and read on!

The idea here is to break it down into subproblems.

That's what this article is about, right?

What if we had two sorted halves of the original array? Could we use them somehow to sort the entire array?

That’s the main idea here. The task of sorting an array can be broken down into two smaller subtasks:

  • sorting two different halves of the array
  • then using those sorted halves to obtain the original sorted array

Now, the beauty of recursion is that you don't need to worry about how we will get two sorted halves and what logic will go into that. Since this is recursion, the same method call to merge_sort would sort the two halves for us. All we need to do is focus on what we need to do once we have the sorted halves with us.

Let’s go through the code:
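The original listing isn't reproduced in this copy; a sketch of what the recursive part plausibly looked like is below (the merge helper it calls is covered a little later):

def merge_sort(arr):
    # Base case: an array of 0 or 1 elements is already sorted.
    if len(arr) <= 1:
        return arr

    mid = len(arr) // 2

    # Trust recursion to sort the two halves for us.
    left_sorted_half = merge_sort(arr[:mid])
    right_sorted_half = merge_sort(arr[mid:])

    # All that's left is combining the two sorted halves.
    return merge(left_sorted_half, right_sorted_half)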

At this point, we trusted and relied on our good friend recursion and assumed that left_sorted_half and right_sorted_half would in fact contain the two sorted halves of the original array.

So, what next?

The question is how to combine them somehow to give the entire array.

The problem now simply boils down to merging two sorted arrays into one. This is a pretty standard problem and can be solved by what is known as the “two finger approach”.

Take a look at the pseudo code for better understanding.

let L and R be our two sorted halves
let ans be the combined, sorted array

l = 0   // The pointer for the left half
r = 0   // The pointer for the right half
a = 0   // The pointer for the array ans

while l < L.length and r < R.length {
    if L[l] < R[r] {
        ans[a] = L[l]
        l++
    } else {
        ans[a] = R[r]
        r++
    }
    a++
}

copy the remaining portion of L or R, whichever still has elements left, into ans

Here we have two pointers (fingers), and we position them at the start of the individual halves. We check which one is smaller (that is, which value pointed at by the finger is smaller), and we add that value to our sorted combined array. We then advance the respective pointer (finger) forward. In the end we copy the remaining portion of the longer array and add it to the back of the ans array.

So, the combined code for merge-sort is as follows:
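The author's combined listing isn't included in this copy either; together with the merge_sort sketch above, the missing piece is the merge helper, which might look like this:

def merge(L, R):
    ans = []
    l = r = 0

    # Two finger approach: repeatedly take the smaller of the two front elements.
    while l < len(L) and r < len(R):
        if L[l] < R[r]:
            ans.append(L[l])
            l += 1
        else:
            ans.append(R[r])
            r += 1

    # Copy whatever remains of either half onto the end.
    ans.extend(L[l:])
    ans.extend(R[r:])
    return ans

With these two functions together, merge_sort([4, 2, 8, 9, 1, 5, 2]) returns [1, 2, 2, 4, 5, 8, 9].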

We will do one final question using recursion and, trust me, it's a tough one and a pretty confusing one. But before moving on to that, I will reiterate the steps I follow whenever I have to think of a recursive solution to a problem.

Steps to come up with a Recursive Solution

  1. Try and break down the problem into subproblems.

  2. Once you have the subproblems figured out, think about what information from the calls to the subproblems you can use to solve the task at hand. For example, the factorial of N - 1 to find the factorial of N, or the heights of the left and right subtrees to find the height of the main tree, and so on.

  3. Keep calm and trust recursion! Assume that your recursive calls to the subproblems will return the information you need in the most optimal fashion.

  4. The final step in this process is actually using the information we just got from the subproblems to find the solution to the main problem. Once you have that, you're ready to code up your recursive solution.

Now that we have all the steps lined up, let’s move on to our final problem in this article. It’s called Sum of Distances in a Tree.

Sum of Distances in a Tree

Let’s look at what the question is asking us to do here. Consider the following tree.

In the example above, the sum of distances for the node A (the total number of edges on all the paths from A to every other node in the tree) is 9. The individual paths are shown in the diagram itself with their respective lengths.

Similarly, consider the sum of distances for the node C.

C --> A --> B (Length 2)
C --> A (Length 1)
C --> D (Length 1)
C --> E (Length 1)
C --> D --> F (Length 2)

Sum of distances (C) = 2 + 1 + 1 + 1 + 2 = 7

This is known as the sum of distances as defined for just a single node A or C. We need to calculate these distances for each of the nodes in the tree.

Before actually solving this generic problem, let us consider a simplified version of the same problem. It says that we just need to calculate the sum of distances for a given node, but we will only consider the tree rooted at the given node for calculations.

So, for the node C, this simplified version of the problem would ask us to calculate:

C --> D (Length 1)
C --> E (Length 1)
C --> D --> F (Length 2)

Simplified Sum of Distances (C) = 1 + 1 + 2 = 4

This is a much simpler problem to tackle recursively and would prove to be useful in solving the original problem.

Consider the following simple tree.

The nodes B and C are the children of the root (that is, A).

We are trying to see what information we can use from the subproblems (the children) to compute the answer for the root A.

Note: here we simply want to calculate the sum of paths for a given node X to all its successors in its own subtree (the tree rooted at the node X).

There are no downward-going paths from the node B, so the sum of paths is 0 for the node B in this tree. Let's look at the node C. This node has 3 different successors: F, D and E. The sums of distances are as follows:

C --> D (Path containing just 1 edge, hence sum of distances = 1)
C --> D --> F (Path containing 2 edges, hence sum of distances = 2)
C --> E (Path containing just 1 edge, hence sum of distances = 1)

The sum of all the paths from the node C to all of its descendants is 4, and the number of such paths going down is 3.

Note the difference here. The sum_of_distances counts the number of edges in each path, with each edge possibly counted multiple times because it occurs on different paths, unlike number_of_paths, which counts, well, the number of paths.

If you look closely, you will realize that the number of paths going down is always going to be the number of nodes in the tree we are considering (except the root). So, for the tree rooted at C, we have 3 paths: one for the node D, one for E, and one for F. This means that the number of paths from a given node to its successor nodes is simply the total number of descendant nodes, since this is a tree with no cycles or multiple edges.

Now, consider the node A. Let us look at all the new paths that are being introduced because of this node A. Forget the node B for now and just focus on the child node C corresponding to A. The new sets of paths that we have are:

A --> C (Path containing just 1 edge, hence sum of distances = 1)
A --> (C --> D) (Path containing 2 edges, hence sum of distances = 2)
A --> (C --> E) (Path containing 2 edges, hence sum of distances = 2)
A --> (C --> D --> F) (Path containing 3 edges, hence sum of distances = 3)

Except for the first path A → C, all the others are the same as the paths for the node C, just extended to include the one extra node A.

If you look at the diagram above you will see a tuple of values next to each of the nodes A, B, and C.

(X, Y), where X is the number of paths originating at that node and going down to its descendants, and Y is the sum of distances for the tree rooted at that node.

Since the node B doesn't have any further children, the only path it contributes to A's tuple of (5, 9) above is the path A --> B. So let's talk about C.

C had three paths going to its successors. Those three paths (extended by one more node for A) also become three paths from A to its successors, among others.

N-Paths[A] = (N-Paths[C] + 1) + (N-Paths[B] + 1)

That is the exact relation we are looking for as far as the number of paths (= the number of successor nodes in the tree) is concerned. The + 1 is because of the new path from the root to its child, that is A --> C in our case.

N-Paths[A] = 3 + 1 + 0 + 1 = 5

As far as the sum of distances is concerned, take a look at the diagram and the equations we just wrote. The following formula becomes very clear:

Sum-Dist[A] = (N-Paths[C] + 1 + Sum-Dist[C]) + (N-Paths[B] + 1 + Sum-Dist[B])
Sum-Dist[A] = (3 + 1 + 4 + 0 + 1 + 0) = 9

The main thing here is N-Paths[C] + Sum-Dist[C]. We sum these up because all of the paths from C to its descendants ultimately become paths from A to its descendants, except that they originate at A and go through C, so each path length is increased by 1. There are N-Paths[C] paths in all originating from C, and their total length is given by Sum-Dist[C].

Hence the tuple corresponding to A = (5, 9). The Python code for the algorithm we discussed above is as follows:
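The author's original listing isn't included in this copy; a sketch of the downward pass, written as a method on a solution class (names like Solution, count_paths_and_distances and the graph adjacency dictionary are assumptions, not the original identifiers, though n_paths and s_paths follow the text above), might look like this:

class Solution:
    def __init__(self, graph):
        # graph is an adjacency dictionary built from the edge list,
        # e.g. {0: [1, 2], 1: [0], 2: [0]}
        self.graph = graph
        self.visited = {}
        self.distances_down = {}        # Sum-Dist: sum of distances to all descendants
        self.number_of_paths_down = {}  # N-Paths: number of downward paths / descendants

    def count_paths_and_distances(self, vertex):
        # Prevents the recursion from going into a cycle.
        self.visited[vertex] = 1

        s_paths = 0  # sum of distances going down from this vertex
        n_paths = 0  # number of downward paths from this vertex

        for child in self.graph[vertex]:
            if child in self.visited:
                # Already processed: this is the node we came from, not a child.
                continue

            # Trust recursion to solve the subtree rooted at the child first.
            self.count_paths_and_distances(child)

            # Each downward path of the child becomes a path from this vertex,
            # one edge longer, plus the single new path to the child itself.
            n_paths += self.number_of_paths_down[child] + 1
            s_paths += self.distances_down[child] + self.number_of_paths_down[child] + 1

        self.distances_down[vertex] = s_paths
        self.number_of_paths_down[vertex] = n_paths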

The Curious Case of the Visited Dictionary :/

If you look at the code above closely, you’ll see this:

# Prevents the recursion from going into a cycle.
self.visited[vertex] = 1

The comment says that this visited dictionary is for preventing the recursion from entering a cycle.

If you've paid attention till now, you know that we are dealing with a tree here.

The definition of a tree data structure doesn't allow cycles to exist. If a cycle exists in the structure, then it is no longer a tree; it becomes a graph. In a tree, there is exactly one path between any pair of vertices. A cycle would mean there is more than one path between some pair of vertices. Look at the figures below.

The structure on the left is a tree. It has no cycles in it. There is a unique path between any two vertices.

The structure on the right is a graph: there is a cycle in it, and hence there can be multiple paths between pairs of vertices. For this particular graph, it so happens that every pair of vertices has more than one path between them, but that is not the case for every graph.

Almost always, we are given the root node of the tree. We can use the root node to traverse the entire tree without having to worry about any cycles as such.

However, if you read the problem statement carefully, it does not say anything about the root of the tree.

That means that there is no designated root for the tree given in the question. This could mean that a given tree can be visualized and processed in so many different ways depending upon what we consider as the root. Have a look at multiple structures for the same tree but with different root nodes.

So many different interpretations and parent child relationships are possible for a given unrooted tree.

So, we start with the node 0 and do a DFS traversal of the given structure. In the process we fix the parent child relationships. Given the edges in the problem, we construct an undirected graph-like structure which we convert to the tree structure. Taking a look at the code should clear up some of your doubts:
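The listing isn't reproduced in this copy; constructing that undirected, graph-like adjacency structure from the edge list probably looked something like this sketch (build_graph is an assumed name, not the original identifier):

from collections import defaultdict

def build_graph(edges):
    # Each edge (u, v) is undirected, so record it in both directions.
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    return graph

Starting the DFS from node 0 on this structure, the visited dictionary then decides, edge by edge, which endpoint acts as the parent and which as the child.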

Every node will have one parent. The root won't have any parent, and the way this logic works, the node 0 becomes the root of our tree. Note that we are not doing this process separately and then calculating the sum of distances downwards. Given a tree, we were trying to find, for every node, the simplified sum of distances for the tree rooted at that node.

So, the conversion from the graph to the tree happens in one single iteration along with finding out the sum of distances downwards for each and every node.

I posted the code again so that the visited dictionary makes much more sense now. So, one single recursion doing all that for us. Nice!

Bringing it all together

Now that we have our tree structure defined, and also the values of sum of distances going downward defined for us, we can use all of this information to solve the original problem of Sum of Distances in a Tree.

How do we do that? It’s best to explain this algorithm with the help of an example. So we will consider the tree below and we will dry run the algorithm for a single node. Let’s have a look at the tree we will be considering.

The node for which we want to find the sum of distances is 4. Now, if you remember the simpler problem we were trying to solve earlier, you know that we already have two values associated with each of the nodes:

  1. distances_down, which is the sum of distances for this node while only considering the tree beneath it.
  2. number_of_paths_down, which is the number of paths / nodes in the tree rooted at the node under consideration.

Let’s look at the annotated version of the above tree. The tree is annotated with tuples (distances_down, number_of_paths_down) .

Let's call the value we want to compute for each node sod, short for sum of distances, which is what the question originally asks us to compute.

Let us assume that we have already computed the answer for the parent node of 4 in the diagram above. So, we now have the following information for the node labelled 2 (the parent node) available:

(sod, distances_down, number_of_paths_down) = (13, 4, 3)

Let’s rotate the given tree and visualize it in a way where 2 is the root of the tree essentially.

Now, we want to remove the contribution of the tree rooted at 4 from sod(2). Let us consider all of the paths from the parent node 2 to all other nodes except the ones in the tree rooted at 4 .

2 --> 5 (1 edge)
2 --> 1 (1 edge)
2 --> 1 --> 7 (2 edges)
2 --> 1 --> 7 --> 9 (3 edges)
2 --> 1 --> 7 --> 10 (3 edges)

Number of nodes considered = 6
Sum of paths remaining, i.e. sod(2) remaining = 1 + 1 + 2 + 3 + 3 = 10

Let’s see how we can use the values we already have calculated to get these updated values.

* N = 8 (the total number of nodes in the tree; this remains the same for every node)
* sod(2) = 13
* distances_down[4] = 1
* number_of_paths_down[4] = 1
* N - 1 - number_of_paths_down[4] = 8 - 1 - 1 = 6 (number_of_paths_down[4] does not include the node 4 itself)
* sod(2) - 1 - distances_down[4] - number_of_paths_down[4] = 13 - 1 - 1 - 1 = 10

If you remember the function we defined earlier, you will notice that the contribution of a child node to the two values number_of_paths_down and distances_down is n_paths + 1 and n_paths + s_paths + 1 respectively. Naturally, that is what we subtract to obtain the remaining tree.

sod(4) represents the sum of edges on all the paths originating at the node 4 in the tree above. Let’s see how we can find this out using the information we have calculated till now.

distances_down[4] represents the answer for the tree rooted at the node 4 but it only considers paths going to its successors, that is all the nodes in the tree rooted at 4. For our example, the successor of 4 is the node 6. So, that will directly add to the final answer. Let’s call this value own_answer . Now, let’s account for all the other paths.

4 --> 2 (1 edge)
4 --> 2 --> 5 (1 + 1 edges)
4 --> 2 --> 1 (1 + 1 edges)
4 --> 2 --> 1 --> 7 (1 + 2 edges)
4 --> 2 --> 1 --> 7 --> 9 (1 + 3 edges)
4 --> 2 --> 1 --> 7 --> 10 (1 + 3 edges)

own_answer = 1
sod(4) = 1 + 1 + 2 + 2 + 3 + 4 + 4 = 17
sod(4) = own_answer + (N - 1 - number_of_paths_down[4]) + (sod(2) - 1 - distances_down[4] - number_of_paths_down[4]) = 1 + 6 + 10 = 17

Before you go bonkers and start doing this, let’s look at the code and bring together all of the things we discussed in the example above.

The recursive relation for this portion is as follows:
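Generalizing the worked example above, for a non-root node and its parent it reads:

sod(node) = distances_down[node]
            + (N - 1 - number_of_paths_down[node])
            + (sod(parent) - 1 - distances_down[node] - number_of_paths_down[node])

And a sketch of this second pass, continuing the assumed Solution class from before (self.parent, self.root, self.N and the initially empty self.sod dictionary are assumed bookkeeping filled in during the first DFS; the original method names may differ), might be:

    def find_sum_of_distances(self, node):
        # MEMOIZATION: reuse the answer if this node has already been computed.
        if node in self.sod:
            return self.sod[node]

        if node == self.root:
            # For the root, every path goes downwards.
            answer = self.distances_down[node]
        else:
            # The parent's answer is needed first; it is computed once and reused.
            parent_sod = self.find_sum_of_distances(self.parent[node])
            answer = (self.distances_down[node]
                      + (self.N - 1 - self.number_of_paths_down[node])
                      + (parent_sod - 1
                         - self.distances_down[node]
                         - self.number_of_paths_down[node]))

        self.sod[node] = answer
        return answer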

Did I just see “MEMOIZATION” in the code?

Yes, indeed you did!

Consider the following example tree:

The question asks us to find the sum of distances for all the nodes in the given tree. So, we would do something like this:

ans = []
for i in range(N):
    ans.append(find_distances(i))

But if you look at the tree above, the recursive call for the node 5 would end up computing the answers for all the nodes in the tree. So we don't need to recompute the answers for the other nodes again and again.

That's why we end up storing the already computed values in a dictionary and using them for further calculations.

Essentially, the recursion is based on a node's parent, and multiple nodes can have the same parent. So the parent's answer only needs to be computed once and can then be used again and again.

If you've managed to read the article this far (not necessarily in one sitting), you are awesome!

If you found this article useful, do share it as much as possible and spread the word. Cheers!