
1) Write about singly linked list.

A singly linked list is a group of memory locations in the computer's memory (RAM) connected to each other using pointers. A pointer is simply an address in the memory address space. Each piece of data in the list is stored in a node, and every node contains two fields: the data itself and a pointer to the next node. In other words, the pointer in one node gives the location of the next node. Suppose we have the following data:

    Memory location    Data
    1000               A
    2000               B
    3000               C

Since A, B, and C are stored in different memory locations, we can group them using a singly linked list: the first node contains A, the second B, and the third C. The node containing A gives the location of the next node (the one containing B) by storing B's memory address (2000) in its pointer field. The links continue in this way until the last node. Since the last node has no successor, its pointer field contains the value NULL (a dead end); when the computer encounters NULL, it stops traversing the list. In this example, the node containing C holds NULL in its pointer field. A special pointer called START gives the memory location of the first node. To summarize: singly linked lists contain nodes which have a data field as well as a next field, which points to the next node in the linked list.
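The A/B/C example above can be sketched in C. This is a minimal illustration; the struct and function names are my own, not from the text:

```c
#include <stddef.h>

/* One node: a data field plus the address of the next node. */
struct node {
    char data;
    struct node *next; /* NULL marks the end of the list */
};

/* Walk the list from `start` (the START pointer), copying each
   data field into `out`. Returns the number of nodes visited. */
int list_to_string(const struct node *start, char *out) {
    int n = 0;
    for (const struct node *p = start; p != NULL; p = p->next)
        out[n++] = p->data;
    out[n] = '\0';
    return n;
}
```

Linking three nodes A -> B -> C and calling list_to_string on the first one yields the string "ABC", mirroring the traversal described above.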

A singly linked list whose nodes contain two fields: an integer value and a link to the next node

Singly linked lists can only be traversed in one direction. This makes linked lists unsuitable for applications where it is useful to look up an element by its index quickly, such as heap sort.

SINGLY LINKED LISTS vs. OTHER LISTS: While doubly linked and/or circular lists have advantages over singly linked linear lists, linear lists offer some advantages that make them preferable in some situations. For one thing, a singly linked linear list is a recursive data structure, because it contains a pointer to a smaller object of the same type. For that reason, many operations on singly linked linear lists (such as merging two lists, or enumerating the elements in reverse order) often have very simple recursive algorithms, much simpler than any solution using iterative commands. While one can adapt those recursive solutions for doubly linked and circularly linked lists, the procedures generally need extra arguments and more complicated base cases. Linear singly linked lists also allow tail-sharing, the use of a common final portion of a sub-list as the terminal portion of two different lists. In particular, if a new node is added at the beginning of a list, the former list remains available as the tail of the new one: a simple example of a persistent data structure. Again, this is not true with the other variants: a node may never belong to two different circular or doubly linked lists.

In particular, end-sentinel nodes can be shared among singly linked non-circular lists. One may even use the same end-sentinel node for every such list. In Lisp, for example, every proper list ends with a link to a special node, denoted by nil or (), whose CAR and CDR links point to itself. Thus a Lisp procedure can safely take the CAR or CDR of any list. Indeed, the advantages of the fancy variants are often limited to the complexity of the algorithms, not their efficiency. A circular list, in particular, can usually be emulated by a linear list together with two variables that point to the first and last nodes, at no extra cost.

Doubly linked vs. singly linked: Doubly linked lists require more space per node (unless one uses XOR-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow sequential access to the list in both directions. In a doubly linked list, one can insert or delete a node in a constant number of operations given only that node's address. To do the same in a singly linked list, one must have the address of the pointer to that node, which is either the handle for the whole list (in the case of the first node) or the link field in the previous node. Some algorithms require access in both directions. On the other hand, doubly linked lists do not allow tail-sharing and cannot be used as persistent data structures.

SINGLY LINKED LIST: Our node data structure will have two fields. We also keep a variable firstNode which always points to the first node in the list, or is null for an empty list.
record Node {
    data             // The data being stored in the node
    Node next        // A reference to the next node, null for last node
}

record List {
    Node firstNode   // points to first node of list; null for empty list
}

Traversal of a singly linked list is simple, beginning at the first node and following each next link until we come to the end:

node := list.firstNode
while node not null
    (do something with node.data)
    node := node.next

The following code inserts a node after an existing node in a singly linked list. The diagram shows how it works. Inserting a node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a node after it.

function insertAfter(Node node, Node newNode) // insert newNode after node
    newNode.next := node.next
    node.next := newNode

Inserting at the beginning of the list requires a separate function. This requires updating firstNode.

function insertBeginning(List list, Node newNode) // insert node before current first node
    newNode.next := list.firstNode
    list.firstNode := newNode

Similarly, we have functions for removing the node after a given node, and for removing a node from the beginning of the list. The diagram demonstrates the former. To find and remove a particular node, one must again keep track of the previous element.

function removeAfter(Node node) // remove node past this one
    obsoleteNode := node.next
    node.next := node.next.next
    destroy obsoleteNode

function removeBeginning(List list) // remove first node
    obsoleteNode := list.firstNode
    list.firstNode := list.firstNode.next // point past deleted node
    destroy obsoleteNode
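The four pseudocode routines above can be sketched in C as follows. This is a minimal illustration under some assumptions of my own (int data, malloc-based allocation, and the makeNode helper are not part of the original pseudocode):

```c
#include <stdlib.h>

struct Node {
    int data;
    struct Node *next;
};

struct List {
    struct Node *firstNode; /* NULL for an empty list */
};

/* insert newNode after node */
void insertAfter(struct Node *node, struct Node *newNode) {
    newNode->next = node->next;
    node->next = newNode;
}

/* insert newNode before the current first node */
void insertBeginning(struct List *list, struct Node *newNode) {
    newNode->next = list->firstNode;
    list->firstNode = newNode;
}

/* remove the node past this one */
void removeAfter(struct Node *node) {
    struct Node *obsolete = node->next;
    node->next = obsolete->next;
    free(obsolete);
}

/* remove the first node */
void removeBeginning(struct List *list) {
    struct Node *obsolete = list->firstNode;
    list->firstNode = obsolete->next;
    free(obsolete);
}

/* illustrative helper: allocate a detached node */
struct Node *makeNode(int data) {
    struct Node *n = malloc(sizeof *n);
    n->data = data;
    n->next = NULL;
    return n;
}
```

For example, two insertBeginning calls followed by an insertAfter build the list 1 -> 2 -> 3, and removeAfter/removeBeginning unlink nodes without touching the rest of the list.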

2) Write about binary search tree.

A binary search tree of size 9 and depth 3, with root 8 and leaves 1, 4, 7 and 13

A binary search tree (BST), which may sometimes also be called an ordered or sorted binary tree, is a node-based binary tree data structure which has the following properties: the left subtree of a node contains only nodes with keys less than the node's key; the right subtree of a node contains only nodes with keys greater than the node's key; and both the left and right subtrees must also be binary search trees. Generally, the information represented by each node is a record rather than a single data element. However, for sequencing purposes, nodes are compared according to their keys rather than any part of their associated records. The major advantage of binary search trees over other data structures is that the related sorting algorithms and search algorithms such as in-order traversal can be very efficient. Binary search trees are a fundamental data structure used to construct more abstract data structures such as sets, multisets, and associative arrays.

OPERATIONS: Operations on a binary search tree require comparisons between nodes. These comparisons are made with calls to a comparator, which is a subroutine that computes the total order (linear order) on any two values. This comparator can be explicitly or implicitly defined, depending on the language in which the BST is implemented.

Make empty: This operation is mainly for initialization. Some programmers prefer to initialize the first element as a one-node tree, but our implementation follows the recursive definition of trees more closely. It is a simple routine, as evidenced by the figure:

SearchTree MakeEmpty(SearchTree T)
{
    if (T != NULL)
    {
        MakeEmpty(T->Left);
        MakeEmpty(T->Right);
        free(T);
    }
    return NULL;
}

Searching: Searching a binary search tree for a specific value can be a recursive or iterative process. This explanation covers a recursive method. We begin by examining the root node. If the tree is null, the value we are searching for does not exist in the tree.
Otherwise, if the value equals the root, the search is successful. If the value is less than the root, search the left subtree. Similarly, if it is greater than the root, search the right subtree. This process is repeated until the value is found or the indicated subtree is null. If the searched value is not found before a null subtree is reached, then the item must not be present in the tree.

<<declarations>>=
struct bst_node** search(struct bst_node** root, comparator compare, void* data);

<<search operation>>=
struct bst_node** search(struct bst_node** root, comparator compare, void* data) {
    struct bst_node** node = root;
    while (*node != NULL) {
        int compare_result = compare(data, (*node)->data);
        if (compare_result < 0)
            node = &(*node)->left;
        else if (compare_result > 0)
            node = &(*node)->right;
        else
            break;
    }
    return node;
}

Insertion: Insertion begins as a search would begin; if the root is not equal to the value, we search the left or right subtrees as before. Eventually, we will reach an external node and add the value as its right or left child, depending on the node's value. In other words, we examine the root and recursively insert the new node into the left subtree if the new value is less than the root, or into the right subtree if the new value is greater than or equal to the root. Here's how a typical binary search tree insertion might be performed in C:

<<declarations>>=
void insert(struct bst_node** root, comparator compare, void* data);

<<insert operation>>=
void insert(struct bst_node** root, comparator compare, void* data) {
    struct bst_node** node = search(root, compare, data);
    if (*node == NULL) {
        *node = new_node(data);
    }
}
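To see these routines in action, here is a self-contained sketch. The struct layout, the integer comparator, and the new_node helper are assumptions of mine; the original text does not define them:

```c
#include <stdlib.h>

struct bst_node {
    void *data;
    struct bst_node *left;
    struct bst_node *right;
};

typedef int (*comparator)(void *, void *);

/* Assumed comparator for int* keys: negative, zero, or positive. */
int compare_ints(void *a, void *b) {
    int x = *(int *)a, y = *(int *)b;
    return (x > y) - (x < y);
}

/* Assumed allocator for a detached leaf node. */
struct bst_node *new_node(void *data) {
    struct bst_node *n = calloc(1, sizeof *n);
    n->data = data;
    return n;
}

/* Same logic as the search operation in the text: returns the address
   of the link where the value lives, or where it would be inserted. */
struct bst_node **search(struct bst_node **root, comparator compare, void *data) {
    struct bst_node **node = root;
    while (*node != NULL) {
        int r = compare(data, (*node)->data);
        if (r < 0)      node = &(*node)->left;
        else if (r > 0) node = &(*node)->right;
        else            break;
    }
    return node;
}

/* Insert by filling the empty slot that search finds. */
void insert(struct bst_node **root, comparator compare, void *data) {
    struct bst_node **node = search(root, compare, data);
    if (*node == NULL)
        *node = new_node(data);
}
```

Note the double-pointer idiom: search returns the address of the link, so insert can fill an empty slot and a caller can replace or unlink a found node without tracking its parent.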

Deletion: There are three possible cases to consider. Deleting a leaf (a node with no children): deleting a leaf is easy, as we can simply remove it from the tree. Deleting a node with one child: remove the node and replace it with its child. Deleting a node with two children: call the node to be deleted N. Do not delete N. Instead, choose either its in-order successor node or its in-order predecessor node, R. Replace the value of N with the value of R, then delete R.

As with all binary trees, a node's in-order successor is the left-most child of its right subtree, and a node's in-order predecessor is the right-most child of its left subtree. In either case, this node will have zero or one children. Delete it according to one of the two simpler cases above. Consistently using the in-order successor or the in-order predecessor for every instance of the two-child case can lead to an unbalanced tree, so good implementations add inconsistency to this selection.

Running Time Analysis: Although this operation does not always traverse the tree down to a leaf, this is always a possibility; thus in the worst case it requires time proportional to the height of the tree. It does not require more even when the node has two children, since it still follows a single path and does not visit any node twice.

Deleting a node with two children from a binary search tree. The triangles represent subtrees of arbitrary size, each with its leftmost and rightmost child nodes at the bottom two vertices.

<<declarations>>=
void delete(struct bst_node** node);

<<delete operation>>=
void delete(struct bst_node** node) {
    struct bst_node* old_node = *node;
    if ((*node)->left == NULL) {
        *node = (*node)->right;
        free_node(old_node);
    } else if ((*node)->right == NULL) {
        *node = (*node)->left;
        free_node(old_node);
    } else {
        <<delete node with two children>>
    }
}
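The two-children chunk is left unexpanded in the text. One common way to fill it, following the in-order-successor rule described above, is sketched here on a simplified tree with int keys (the struct layout and function names are illustrative, not the text's own):

```c
#include <stdlib.h>

struct bst_node {
    int key;
    struct bst_node *left;
    struct bst_node *right;
};

/* Illustrative helper: allocate a leaf node with the given key. */
struct bst_node *bst_new(int key) {
    struct bst_node *n = calloc(1, sizeof *n);
    n->key = key;
    return n;
}

/* Delete the node at *node. In the two-children case, copy in the
   in-order successor's key (left-most node of the right subtree),
   then delete that successor, which has at most one child. */
void bst_delete(struct bst_node **node) {
    struct bst_node *old = *node;
    if (old->left == NULL) {
        *node = old->right;
        free(old);
    } else if (old->right == NULL) {
        *node = old->left;
        free(old);
    } else {
        struct bst_node **succ = &old->right;
        while ((*succ)->left != NULL)
            succ = &(*succ)->left;
        old->key = (*succ)->key;   /* replace N's value with R's */
        bst_delete(succ);          /* R has zero or one children */
    }
}
```

Because the recursive call targets a node with at most one child, it always falls into one of the two simpler branches, matching the argument in the running-time analysis above.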

SORT: A binary search tree can be used to implement a simple but efficient sorting algorithm. Similar to heap sort, we insert all the values we wish to sort into a new ordered data structure (in this case a binary search tree) and then traverse it in order.

FindMIN and FindMAX: Given a non-empty binary search tree (an ordered binary tree), return the minimum data value found in that tree. Note that it is not necessary to search the entire tree. A maxValue() function is structurally very similar to this function. This can be solved with recursion or with a simple while loop.
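The tree-sort idea can be sketched as follows, assuming a simple int-keyed node (not the void* version used elsewhere in this section):

```c
#include <stdlib.h>

struct tnode {
    int key;
    struct tnode *left, *right;
};

/* Standard BST insertion by key; duplicates go to the right. */
struct tnode *tree_insert(struct tnode *root, int key) {
    if (root == NULL) {
        struct tnode *n = calloc(1, sizeof *n);
        n->key = key;
        return n;
    }
    if (key < root->key)
        root->left = tree_insert(root->left, key);
    else
        root->right = tree_insert(root->right, key);
    return root;
}

/* In-order traversal writes keys into out[] in ascending order.
   `n` is the next free index; the final count is returned. */
int tree_inorder(const struct tnode *root, int *out, int n) {
    if (root == NULL)
        return n;
    n = tree_inorder(root->left, out, n);
    out[n++] = root->key;
    return tree_inorder(root->right, out, n);
}
```

Inserting the values and then reading them back in order yields a sorted sequence, which is the whole sorting algorithm.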

int minValue(struct node* node) {
    struct node* current = node;
    while (current->left != NULL)  /* loop down to find the leftmost node */
        current = current->left;
    return current->data;
}

Given a binary tree, compute its "maxDepth" -- the number of nodes along the longest path from the root node down to the farthest leaf node. The maxDepth of the empty tree is 0; the maxDepth of the tree on the first page is 3.

int maxDepth(struct node* node) {
    if (node == NULL)
        return 0;
    else {
        /* compute the depth of each subtree and use the larger one */
        int lDepth = maxDepth(node->left);
        int rDepth = maxDepth(node->right);
        return (lDepth > rDepth ? lDepth : rDepth) + 1;
    }
}
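A quick self-contained check of both routines on the size-9-style example from the start of this question (the struct node layout with an int data field is assumed from context):

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *left, *right;
};

/* Minimum lives at the leftmost node; no full search needed. */
int minValue(struct node *node) {
    struct node *current = node;
    while (current->left != NULL)
        current = current->left;
    return current->data;
}

/* Nodes along the longest root-to-leaf path; 0 for the empty tree. */
int maxDepth(struct node *node) {
    if (node == NULL)
        return 0;
    int l = maxDepth(node->left);
    int r = maxDepth(node->right);
    return (l > r ? l : r) + 1;
}
```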

3) Write about AVL tree.

An AVL tree is a self-balancing binary search tree, and it was the first such data structure to be invented. In an AVL tree, the heights of the two child subtrees of any node differ by at most one. The AVL tree is named after its two Soviet inventors, G. M. Adelson-Velskii and E. M. Landis, who published it in their 1962 paper "An algorithm for the organization of information." The balance factor of a node is the height of its left subtree minus the height of its right subtree (sometimes the opposite convention is used), and a node with balance factor -1, 0, or +1 is considered balanced. A node with any other balance factor is considered unbalanced and requires rebalancing the tree. The balance factor is either stored directly at each node or computed from the heights of the subtrees. AVL trees are often compared with red-black trees because they support the same set of operations and because red-black trees also take O(log n) time for the basic operations. Because AVL trees are more rigidly balanced, they are faster than red-black trees for lookup-intensive applications.

OPERATIONS: Basic operations of an AVL tree involve carrying out the same actions as would be carried out on an unbalanced binary search tree, but modifications are preceded or followed by one or more operations called tree rotations, which help to restore the height balance of the subtrees.

LOOK UP: Lookup in an AVL tree is performed exactly as in an unbalanced binary search tree. Because of the height-balancing of the tree, a lookup takes O(log n) time. No special actions need to be taken, and the tree's structure is not modified by lookups. (This is in contrast to splay tree lookups, which do modify their tree's structure.) If each node additionally records the size of its subtree (including itself and its descendants), then the nodes can be retrieved by index in O(log n) time as well.

INSERTION:

Pictorial description of how rotations rebalance the tree, retracing one's steps toward the root and updating the balance factors of the nodes. The numbered circles represent the nodes being balanced. The lettered triangles represent subtrees which are themselves balanced BSTs.

After inserting a node, it is necessary to check each of the node's ancestors for consistency with the rules of AVL. For each node checked, if the balance factor remains -1, 0, or +1 then no rotations are necessary. However, if the balance factor becomes -2 or +2 then the subtree rooted at this node is unbalanced. If insertions are performed serially, after each insertion at most one of the following cases needs to be resolved to restore the entire tree to the rules of AVL. There are four cases which need to be considered, of which two are symmetric to the other two. Let P be the root of the unbalanced subtree, with R and L denoting the right and left children of P respectively.

Right-Right case and Right-Left case: If the balance factor of P is -2, then the right subtree outweighs the left subtree of the given node, and the balance factor of the right child (R) must be checked. If the balance factor of R is -1, a single left rotation with P as the root is needed (Right-Right case). If the balance factor of R is +1, two different rotations are needed: the first is a right rotation with R as the root, the second a left rotation with P as the root (Right-Left case).

Left-Left case and Left-Right case: If the balance factor of P is +2, then the left subtree outweighs the right subtree of the given node, and the balance factor of the left child (L) must be checked. If the balance factor of L is +1, a single right rotation with P as the root is needed (Left-Left case). If the balance factor of L is -1, two different rotations are needed: the first is a left rotation with L as the root, the second a right rotation with P as the root (Left-Right case).
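The rotations themselves are short pointer surgeries. Here is a minimal sketch (the node layout and function names are assumptions; height/balance-factor bookkeeping is omitted for brevity):

```c
#include <stddef.h>

struct avl_node {
    int key;
    struct avl_node *left, *right;
};

/* Left rotation: P's right child R becomes the subtree root, and
   R's old left subtree becomes P's right subtree. This resolves
   the Right-Right case. */
struct avl_node *rotate_left(struct avl_node *p) {
    struct avl_node *r = p->right;
    p->right = r->left;
    r->left = p;
    return r; /* new subtree root */
}

/* Right rotation, the mirror image: resolves the Left-Left case. */
struct avl_node *rotate_right(struct avl_node *p) {
    struct avl_node *l = p->left;
    p->left = l->right;
    l->right = p;
    return l;
}

/* Right-Left case: right rotation at R, then left rotation at P. */
struct avl_node *rotate_right_left(struct avl_node *p) {
    p->right = rotate_right(p->right);
    return rotate_left(p);
}
```

A full implementation would also update the stored heights or balance factors of P and R after each rotation; that bookkeeping is left out here to keep the pointer movements visible.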

DELETION: If the node is a leaf or has only one child, remove it. Otherwise, replace it with either the largest node in its left subtree (the in-order predecessor) or the smallest node in its right subtree (the in-order successor), and remove that node. The node that was found as a replacement has at most one subtree. After deletion, retrace the path back up the tree (from the parent of the replacement) to the root, adjusting the balance factors as needed. As with all binary trees, a node's in-order successor is the left-most child of its right subtree, and a node's in-order predecessor is the right-most child of its left subtree. In either case, this node will have zero or one children. Delete it according to one of the two simpler cases above.

In addition to the balancing described above for insertions, if the balance factor for the tree is +2 and that of the left subtree is 0, a right rotation must be performed on P. The mirror of this case is also necessary.

The retracing can stop if the balance factor becomes -1 or +1, indicating that the height of that subtree has remained unchanged. If the balance factor becomes 0, then the height of the subtree has decreased by one and the retracing needs to continue. If the balance factor becomes -2 or +2, then the subtree is unbalanced and needs to be rotated to fix it. If the rotation leaves the subtree's balance factor at 0, then the retracing towards the root must continue, since the height of this subtree has decreased by one. This is in contrast to an insertion, where a rotation resulting in a balance factor of 0 indicates that the subtree's height has remained unchanged. The time required is O(log n) for lookup, plus a maximum of O(log n) rotations on the way back to the root, so the operation can be completed in O(log n) time.

COMPARISON TO OTHER STRUCTURES: Both AVL trees and red-black trees are self-balancing binary search trees, so they are very similar mathematically. The operations to balance the trees are different, but both occur in O(log n) time. The real difference between the two is the limiting height. For a tree of size n, an AVL tree's height is strictly less than

    log_φ(√5 (n + 2)) − 2 ≈ 1.44 log₂(n + 2),

where φ ≈ 1.618 is the golden ratio. A red-black tree's height is at most 2 log₂(n + 1). AVL trees are more rigidly balanced than red-black trees, leading to slower insertion and removal but faster retrieval.

ROTATIONS: Single rotation:

Balancing an AVL Tree with a Single (LL) Rotation

Figure (a) shows an AVL balanced tree. For example, the balance factor for node A is zero, since its left and right subtrees have the same height, and the balance factor of node B is +1, since its left subtree has height h+1 and its right subtree has height h. Suppose we insert an item into the left subtree of A. The height of that subtree can either increase or remain the same; in this case we assume that it increases. Then, as shown in Figure (b), the resulting tree is no longer AVL balanced. Notice where the imbalance has been manifested: node A is balanced but node B is not. Balance can be restored by reorganizing the two nodes A and B and the three subtrees, as shown in Figure (c). This is called an LL rotation, because the first two edges in the insertion path from node B both go to the left. There are three important properties of the LL rotation:

1. The rotation does not destroy the data ordering property, so the result is still a valid search tree: the first subtree remains to the left of node A, the middle subtree remains between nodes A and B, and the last subtree remains to the right of node B.
2. After the rotation both A and B are AVL balanced; both nodes end up with zero balance factors.
3. After the rotation, the tree has the same height it had originally. Inserting the item did not increase the overall height of the tree!

Double rotations:

The tree can be restored by performing an RR rotation at node A, followed by an LL rotation at node C. The tree which results is shown in Figure (c). The LL and RR rotations are called single rotations. The combination of the two single rotations is called a double rotation and is given the name LR rotation, because the first two edges in the insertion path from node C go first left and then right. Obviously, the left-right mirror image of the LR rotation is called an RL rotation. An RL rotation is called for when the root becomes unbalanced with a negative balance factor but the right subtree of the root has a positive balance factor. Double rotations have the same properties as the single rotations: the result is a valid, AVL-balanced search tree, and the height of the result is the same as that of the initial tree. Clearly the four rotations, LL, RR, LR, and RL, cover all the possible ways in which any one node can become unbalanced. But how many rotations are required to balance a tree when an insertion is done? The following theorem addresses this question:

Theorem: When an AVL tree becomes unbalanced after an insertion, exactly one single or double rotation is required to balance the tree.

Proof: When an item, x, is inserted into an AVL tree, T, that item is placed in an external node of the tree. The only nodes in T whose heights may be affected by the insertion of x are those nodes which lie on the access path from the root of T to x. Therefore, the only nodes at which an imbalance can appear are those along the access path. Furthermore, when a node is inserted into a tree, either the height of the tree remains the same or the height of the tree increases by one.

Balancing an AVL Tree with a Double (LR) Rotation

Consider some node c along the access path from the root of T to x. When x is inserted, the height of c either increases by one, or remains the same. If the height of c does not change, then no rotation is necessary at c or at any node above c in the access path. If the height of c increases then there are two possibilities: Either c remains balanced or an imbalance appears at c. If c remains balanced, then no rotation is necessary at c. However, a rotation may be needed somewhere above c along the access path. On the other hand, if c becomes unbalanced, then a single or a double rotation must be performed at c. After the rotation is done, the height of c is the same as it was before the insertion. Therefore, no further rotation is needed above c in the access path.
