Computational Geometry and Computer Graphics in C++
by Michael J. Laszlo
The decision tree represents all possible computations for all inputs of a given size; there is a different decision tree for every input size.


Each node of the decision tree corresponds to a probe. The pair of edges that leaves each node corresponds to a branching instruction: based on the outcome of the probe, a given computation follows one of the two edges to the next node. The external nodes of the decision tree (the terminating nodes, which no edges exit) correspond to solutions.

The decision tree is in fact a binary tree (Figure 2), with a probe associated with each circular internal node and a final outcome associated with each square external node.

See Section 3. Each path from the topmost node or root down to some external node represents a possible computation, and the path's length equals the number of probes performed by the computation. Here the length of a path equals the number of edges it contains. Hence the length of the longest such path in the decision tree represents the number of probes performed in the worst case.

What is the shortest this longest path can be? Let us show that in any binary tree containing n external nodes, there exists some path with length at least log2 n. Where the height of a binary tree is defined as the length of some longest path from root to external node, we can state what we wish to show as follows: Theorem 1. A binary tree containing n external nodes has height at least log2 n. We can show Theorem 1 by induction on n.

For the inductive step, assume as the inductive hypothesis that the theorem holds for all binary trees with fewer than n external nodes.

We must show that it holds for every binary tree T containing n external nodes. The root node of T attaches to two smaller binary trees T1 and T2 containing n1 and n2 external nodes, respectively.

Any algorithm for problem SEARCH corresponds, on inputs of size n, to a decision tree which contains at least n external nodes. Theorem 1 informs us that the decision tree has height no less than log2 n. Hence the algorithm performs at least log2 n probes for some inputs of size n. Binary search is indeed an optimal solution to the problem.
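As a concrete illustration, binary search can be sketched with a probe counter; this is illustrative code, not the book's, and the function name and signature are assumed. Each probe halves the field of candidates, so the count never exceeds floor(log2 n) + 1, matching the decision-tree bound.

```cpp
#include <vector>

// Hypothetical sketch: binary search over a sorted array, counting the
// number of probes (inspections of a[mid]). Each iteration halves the
// field of candidates, so probes <= floor(log2 n) + 1.
bool binarySearch(const std::vector<int>& a, int x, int& probes)
{
    probes = 0;
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        ++probes;                      // one probe of a[mid]
        if (a[mid] == x)      return true;
        else if (a[mid] < x)  lo = mid + 1;
        else                  hi = mid - 1;
    }
    return false;
}
```

For n = 8, the sketch performs at most 4 probes, in line with the lower bound of log2 8 = 3 probes argued above.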

One problem's lower bound can sometimes be transferred to another problem. The idea is to show that problem PB can be solved no faster than PA, thereby transferring PA's lower bound to PB. To show that two problems are related in this way, we exhibit an efficient algorithm A for PA which calls a procedure solving problem PB no more than a constant number of times. Let us consider an example before formalizing these ideas. Given an unordered set of n integers, decide whether any two are equal.

Given an unordered set of n integers, arrange the integers in nondecreasing order. For both problems we will assume a comparison-based model of computation: the only operations we count are comparisons, in which we compare two integers in constant time (one step). To decide whether some integer occurs more than once in a given array of n integers, first sort the array and then step down the array while comparing all pairs of consecutive integers. An integer occurs twice in the original array if and only if it occupies consecutive positions in the now-sorted array.
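The reduction just described can be sketched as follows (an illustrative fragment, not from the book; the function name is assumed):

```cpp
#include <algorithm>
#include <vector>

// Sketch of the reduction: to decide whether any two of n integers are
// equal, sort them (O(n log n)) and then scan consecutive pairs of the
// sorted array (O(n)).
bool hasDuplicate(std::vector<int> a)
{
    std::sort(a.begin(), a.end());          // the call to the sorting procedure
    for (std::size_t i = 1; i < a.size(); ++i)
        if (a[i - 1] == a[i])               // duplicates are now adjacent
            return true;
    return false;
}
```

Note that the duplicate-detection problem uses the sorting procedure exactly once, which is what makes the reduction useful for transferring bounds.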

Sorting would take O(n log n) time, and checking consecutive integers in the now-sorted array would take O(n) ⊆ O(n log n) time.

Let us formalize these ideas. Suppose that:
1. A uses some hypothetical algorithm B for PB as a function call some constant number of times, and
2. A runs in O(r(n)) time, where each call to B costs one step.
In effect, we are augmenting the model of computation for problem PA by a constant-time operation B which solves problem PB. We are free to assume operation B without specifying or even having knowledge of a particular solution for problem PB.

Then, assuming that r(n) ∈ o(f(n)), problem PB must also have lower bound Ω(f(n)). Were there to exist some algorithm B for PB which runs in o(f(n)) time, algorithm A would also run in o(f(n)) time if it used algorithm B, and A would then violate the lower bound on problem PA. (Figure 2: an r(n)-time reduction from problem PA to problem PB.) Were this assumption not to hold, algorithm A would run in Ω(f(n)) time regardless of algorithm B's performance, and PA's known lower bound would not be violated.

This is the sense in which the reduction must be efficient to be useful (see Figure 2). Let us return to our example. SORT has an upper bound of O(n log n); merge sort is but one algorithm which solves the problem in O(n log n) time. More generally, suppose that problem PB has known upper bound O(g(n)) and we can exhibit an r(n)-time reduction from problem PA to problem PB. Then, assuming r(n) ∈ O(g(n)), problem PA must have the same upper bound O(g(n)).

Also recommended is the paper [82], which explores amortized analysis and presents several examples. Knuth employed O-notation and asymptotic analysis of algorithms as early as [48] and endorsed conscientious use of O-, Ω-, Θ-, and o-notations three years later in [49]. Both references discuss the history of asymptotic notation.

Mathematical induction is defined and discussed in [57] and is made the basis for algorithm design and analysis in [56].


Discuss the advantages and disadvantages of measuring the running time of a program by timing it while it executes on a computer. Is this a reliable performance measure? Implement sequentialSearch and binarySearch and graph their execution times as input grows large. Since both programs realize worst-case performance when search fails, you will probably want to use search keys that do not occur in the input.

Does your graph bear out the worst-case running time of O(n) for sequential search and O(log n) for binary search? Implement the sorting algorithms insertionSort [which runs in O(n^2) time], selectionSort [O(n^2) time], and mergeSort [O(n log n) time], and graph their execution times as input grows large.

Does your graph bear out their running times? These programs are discussed at the beginning of Chapters 5, 6, and 8. Use the fact that log n! Given a sorted array a of n distinct integers and a search key x, report whether or not x occurs in a. Although only two outcomes are possible, argue that the decision tree must nonetheless contain at least n external nodes. The data structures an algorithm uses affect the algorithm's efficiency, as well as the ease with which it can be programmed and understood.

This is what makes the study of data structures so important. One example is the integers, together with the operations for manipulating them: addition, subtraction, multiplication, and so forth. Other examples include floating-point numbers, characters, pointer types, and reference types. The predefined data structures can be used "as is" in programs, or combined through such devices as classes and arrays to form data structures of greater complexity.

In this chapter we present and implement the data structures we will use: linked lists, lists, stacks, and binary search trees. These data structures are elementary yet powerful, and the implementations we provide are standard and practical. We will not attempt a comprehensive treatment of data structures in this chapter; rather, our primary motivation is to provide the data structures needed for the programs to follow.

A data structure consists of a storage structure to hold the data, and methods for creating, modifying, and accessing the data. More formally, a data structure consists of these three components:
1. A set of operations for manipulating specific types of abstract objects
2. A storage structure in which the abstract objects are stored
3. An implementation of each of the operations in terms of the storage structure
Component 1 of the definition, a set of operations on abstract objects, is called an abstract data type, or ADT.

Components 2 and 3 together make up the data structure's implementation. For example, the array ADT supports the manipulation of a set of values of the same type, through operations for accessing and modifying values specified by an index. An abstract data type specifies what a data structure does-what operations it supports-without revealing how it does it.

While crafting a program, the programmer is thereby free to think in terms of what abstract data types the program requires and can postpone the work of implementing them.

This programming practice of separating an algorithm from its data structures so they can be considered separately is called data abstraction. Data abstraction distinguishes different levels of abstract thought. The stack ADT encourages thought at the level of stack operations while postponing the lower-level question of how to implement stacks. Use of abstract data types also encourages modular programming, the practice of breaking programs into separate modules with well-defined interfaces.

Modularity has numerous advantages. Modules can be written and debugged apart from the rest of the program, by another programmer or at another time. Modules are reusable, so they can often be loaded from a library or copied from another program. In addition, a module can be replaced by another module that is functionally equivalent to, but more efficient, robust, or, in some other sense, better than the original.

Despite these advantages, not every data structure should be treated as an abstract data type. An array ADT provides index-based access and update operations for manipulating sets, but there is often no advantage to formulating arrays this abstractly: Doing so circumvents the familiar view of an array as a contiguous block of memory and introduces an often unnecessary level of abstraction.

It is also inefficient, since the additional function calls may incur an overhead cost, and efficient pointer methods that exploit the close tie between pointers and arrays are forfeited. Our primary goal in this chapter will be to present the abstract data types used by the algorithms in this book and to implement them efficiently.

Familiarity with these ADTs will enable us to describe our algorithms at a higher, more abstract level, in terms of abstract operations. Familiarity with how these ADTs are implemented, besides being of interest in its own right, will enable us to implement and run the algorithms which employ them. Moreover, knowing how an ADT is implemented will allow us to tinker with a data structure when it only approximately meets our needs.

The items are stored in the nodes of the linked list, ordered sequentially in list order.

Each node possesses a pointer, often known as a link, to the next node. The last node is distinguished by a special value in its link field (the null link in Figure 3). For handling lists, the linked list has two advantages over the array. First, rearranging the order of items in a linked list is easy and fast, usually accomplished by updating a small number of links.

Contrast this with insertion of an item into an array a representing a list of n items. To insert the item into position i, each of the items in a[i] through a[n-1] must be shifted one position to the right to make a "hole" for the new item in position i. Unlike arrays, linked lists are dynamic: they are able to shrink and grow in size during their lifetime. With linked lists it is unnecessary to safeguard against running out of space by preallocating an excessive amount of memory.


There are several kinds of linked lists. Each node of a singly linked list links to the next node, called its successor. Each node of a doubly linked list links both to the next node and to the previous node, its predecessor. In a circular linked list, the last node links to the first node; if the circular linked list is doubly linked, the first node also links to the last node (see Figure 3). We will concentrate on circular doubly linked lists since they are best suited for the algorithms covered in this book; for brevity, we will usually refer to them simply as linked lists.

We will often use them, for instance, to represent polygons, where each node of the linked list corresponds to a vertex of the polygon, and the nodes are ordered as the vertices around the polygon. In a pointer-based implementation of the linked list, a node is an object of class Node; a newly constructed node represents a single-node linked list. (Figure 3 depicts a singly linked list and a circular doubly linked list.) In the constructor Node(void), this points to the object being allocated.

Accordingly, in discussions about member functions, we will often refer to the receiver object as this object. The destructor ~Node is responsible for deallocating this object. It is declared virtual within the class definition so that derived objects (objects of classes derived from class Node) are correctly deallocated. Nodes can also be removed from linked lists: member function remove removes this node from its linked list (Figure 3).
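The Node operations described here might be sketched as follows. This is a minimal reconstruction in the spirit of the text, not the book's actual code; the member names are assumed, and the links are left public for illustration. A freshly constructed node is a single-node circular doubly linked list: it is its own successor and predecessor.

```cpp
// Minimal sketch of a circular doubly linked list node (names assumed).
class Node {
public:
    Node* _next;
    Node* _prev;
    Node(void) : _next(this), _prev(this) {}
    virtual ~Node(void) {}
    // insert node b just after this node and return b
    Node* insert(Node* b) {
        Node* c = _next;
        b->_next = c;
        b->_prev = this;
        _next = b;
        c->_prev = b;
        return b;
    }
    // unlink this node from its list and return it for later deallocation
    Node* remove(void) {
        _prev->_next = _next;
        _next->_prev = _prev;
        _next = _prev = this;
        return this;
    }
};
```

Note that remove restores the removed node to a single-node list, so it can safely be inserted elsewhere or deallocated.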

It returns a pointer to this node so it can later be deallocated. Splicing achieves different results depending on whether a and b belong to the same linked list.

If they do, the linked list is split into two smaller linked lists. Alternatively, if a and b belong to different linked lists, the two linked lists are joined into one larger linked list. We must also update the appropriate predecessor links: we link the new successor of a back to a, and the new successor of b back to b. The operation is depicted in Figure 3. The figure indicates that splice is its own inverse: splicing nodes a and b in the left diagram yields the right diagram, and splicing a and b once again in the right diagram produces the left diagram.

In the following implementation of member function splice, this node plays the role of node a, and the function's argument that of node b. (Figure 3: removing a node from its linked list; splicing nodes a and b.) Furthermore, splicing a single-node linked list b to node a of some other linked list effectively inserts b after a.

This suggests that inserting a node into a linked list and removing a node from a linked list are actually special cases of splice. Indeed this is the case; member functions insert and remove are provided for convenience.
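The split/join behavior of splice can be sketched as follows (an illustrative reconstruction; the free-function form and helper are assumed, not the book's member-function code). Exchanging the successor links of a and b, and repairing the predecessor links, either splits one circular list in two or joins two circular lists into one, and doing it twice restores the original configuration.

```cpp
// Sketch of splice on circular doubly linked lists (names assumed).
struct Node {
    Node* next;
    Node* prev;
    Node() : next(this), prev(this) {}
};

void splice(Node* a, Node* b)
{
    Node* an = a->next;            // new successor of b
    Node* bn = b->next;            // new successor of a
    a->next = bn;  bn->prev = a;   // link a to b's old successor
    b->next = an;  an->prev = b;   // link b to a's old successor
}

// helper: number of nodes in the circular list containing n
int length(Node* n)
{
    int k = 1;
    for (Node* p = n->next; p != n; p = p->next) ++k;
    return k;
}
```

Splicing two nodes of the same list splits it; splicing nodes of different lists joins them, so splice is indeed its own inverse.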

This is easily verified by examining the implementation of splice.

A list is an ordered set of finitely many items. The length of a list equals the number of items it contains; a list of length zero is called an empty list.

In our view of a list, every item in a list occupies one position-the first item is in the first position, the second item is in the second position, and so forth.

There is in addition a head position which simultaneously occurs before the first position and after the last position. A List object provides access to its items through a window, which at any given time is located over some position in the list. Most list operations refer to either the window or the item in the window. For instance, we can obtain the item in the window, advance the window to the next or previous position, or move the window to the first position or last position in the list.

We can also do such things as remove from the list the item in the window or insert a new item into the list, just after the window (Figure 3). We will use linked lists to implement class List. Each node corresponds to a position in the list and contains the item stored at that position (the node corresponding to the head position is called the header node). A node is represented by an object of class ListNode, which is derived from class Node.

Class ListNode possesses a data member _val, which points to the actual item. A list is a collection of items of some given type. Yet there is no need to build a specific item type into the definitions of classes ListNode or List, for the list operations behave the same regardless of item type. For this reason we define these classes as class templates. The item type is made a parameter of the class template.

Later, when we need a list of items of a particular type, we instantiate the template with that type. (Figure 3: the structure of a list of length seven. The items in the list occur in positions 1 through 7; the head position 0 occurs between the first and last positions.)

The window, indicated by the square, is currently over position 2. Class template ListNode is defined as follows. To declare an instance of ListNode, we supply a type for parameter T. For instance, the declaration ListNode<int> a, b; declares a and b as ListNode objects each containing a pointer-to-int.

The constructor ListNode is defined like this. We will not define a destructor for class ListNode; whenever a ListNode object is deleted, the base class's destructor Node::~Node is invoked automatically. Note that the item pointed to by data member ListNode::_val is not deallocated. It would be safe to deallocate the item only if it were known to have been allocated with new, and there is no guarantee of this.

Let us turn to the definition of class template List. Class List contains three data members. Thus the declaration List<Polygon*> p; declares p to be a list of pointer-to-Polygons, whereas a declaration that supplies no item type is illegal.

None of the three functions moves the window. Function insert inserts a new item after the window and returns the item. A companion operation splices another list l onto this list; list l is made empty in the process, and this list is returned. Member function remove removes the item in the window, moves the window to the previous position, and returns the just-removed item. The operation does nothing if the window is in the head position.

Each returns the item stored in the window's new position.

The first example, function template arrayToList, loads the n items in array a into a list, which it then returns. For the second example, function template leastItem returns the smallest item in list s.

Two elements are compared using the comparison function cmp, which returns -1, 0, or 1 if its first argument is less than, equal to, or greater than its second argument. For instance, applied to a list of strings under a lexicographic comparison function, leastItem might return the string ant. There are also list-oriented data structures that restrict access to items, and one of the most important is the pushdown stack, or simply stack. The stack limits access to the item most recently inserted. For this reason, stacks are also known as last-in-first-out lists, or LIFO lists.
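A leastItem in the spirit of the text can be sketched as follows; for self-containment it is written against std::vector rather than the book's List class (an assumption), but the comparison-function convention is the same.

```cpp
#include <vector>

// Sketch of leastItem (container swapped for std::vector as an assumption).
// cmp returns -1, 0, or 1 as its first argument is less than, equal to,
// or greater than its second. Assumes s is nonempty.
template <class T>
T leastItem(const std::vector<T>& s, int (*cmp)(T, T))
{
    T best = s[0];
    for (std::size_t i = 1; i < s.size(); ++i)
        if (cmp(s[i], best) < 0)   // s[i] precedes best in the order
            best = s[i];
    return best;
}

// an example comparison function over int
int intCmp(int a, int b) { return (a < b) ? -1 : (a > b) ? 1 : 0; }
```

Passing the comparison function as a parameter keeps leastItem independent of the item type and of any particular ordering.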

The two basic stack operations are push and pop. The operation push inserts an item into the stack, and pop removes from the stack the last item pushed. The words stack, push, and pop suggest a helpful picture, that of objects stacked one on top of the next in which only the topmost object is accessible.

The push operation pushes (inserts) a new item onto the top, and the pop operation pops (removes and returns) the top object. Other stack operations include empty, which returns TRUE just if the stack is empty, and operations for peeking at select items on the stack without removing them (see Figure 3). One simple implementation of a stack uses an array. A stack of n items is stored in elements s[0] through s[n-1] of some array s, and the number of items n is stored in some integer variable top.

Array element s[0] contains the bottom item, and s[top-1] the top item. The list of items grows toward higher indices as pushes are performed and shrinks toward lower indices as pops are performed. The only problem with this implementation is that it is not dynamic: the length of the array limits the size of the stack.

We will pursue an implementation of stacks based on the List class template of the previous section. Our stacks will then be dynamic since List objects are dynamic. The class template Stack contains a private data member s, which points to the List object representing the stack. The list is ordered from the top of the stack to the bottom; in particular, the top item of the stack is in the first position of the list, and the bottom item of the stack is in the last position of the list.

(Figure 3 depicts push operations on a stack.) The public interface of class Stack comprises Stack(void); ~Stack(void); void push(T v); T pop(void); bool empty(void); int size(void); T top(void); T nextToTop(void); and T bottom(void). Implementation of the member functions is straightforward.

The constructor Stack allocates a List object and assigns it to data member s. The destructor ~Stack deallocates the List object pointed to by data member s. None of the three peek operations changes the state of the stack: nothing is popped or pushed.

The class provides a public interface consisting of accessible operations; a stack is manipulated only through this public interface. The storage structure (the list) is hidden in the private part of the class definition and cannot be modified or otherwise accessed except through the interface. Programs using stacks do not need to know anything about how the stacks are implemented; for example, function reverse works whether its stack is implemented using a list or an array.

This implementation is much simpler than one based directly on lower-level building blocks such as linked lists or arrays. The danger in implementing an ADT in terms of a second ADT is that the first ADT inherits the performance characteristics of the second, whose implementation may be inefficient. But in this case, having implemented the list ADT ourselves, we understand its performance and can show easily that each of the operations supported by Stack runs in constant time.
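A list-backed stack in the spirit of the text can be sketched as follows; this version is backed by std::list rather than the book's List template (an assumption for self-containment), but it preserves the invariant that the top of the stack is the first position of the list, so each operation runs in constant time.

```cpp
#include <list>

// Sketch of the Stack interface over std::list (backing store assumed).
// Front of the list = top of the stack; every operation is O(1).
template <class T>
class Stack {
    std::list<T> s;
public:
    void push(T v)    { s.push_front(v); }
    T pop(void)       { T v = s.front(); s.pop_front(); return v; }
    bool empty(void)  { return s.empty(); }
    int size(void)    { return (int)s.size(); }
    T top(void)       { return s.front(); }          // peek operations:
    T nextToTop(void) { return *++s.begin(); }       // none of these
    T bottom(void)    { return s.back(); }           // changes the stack
};
```

Because the interface matches the one listed in the text, code written against it is unaffected by the choice of backing store.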

Searching for a particular item x requires stepping down the list while deciding at each item whether it matches x. A search may require visiting many items or, if the list does not contain x, every item to determine that this is the case.

Even if the list is sorted, a search must visit every item preceding the position where x belongs. Binary trees provide a more efficient way to search. A binary tree is a structured collection of nodes.

The collection can be empty, in which case we have an empty binary tree. If the collection is not empty, it is partitioned into three disjoint sets of nodes: a root node, and two binary trees called the left and right subtrees. The size of a binary tree is the number of internal nodes it contains. The external nodes correspond to empty binary trees (Figure 3). In some contexts the external nodes are labeled, and in others they are not referred to at all and are thought of as empty binary trees.

A metaphor based on genealogy provides a convenient way to refer to specific nodes within a binary tree. Node p is the parent of node n just if n is a child of p.

Two nodes are siblings if they share the same parent. Given two nodes n1 and nk such that nk belongs to the subtree rooted at n1, node nk is said to be a descendant of n1, and n1 an ancestor of nk. There exists a unique path from n1 down to each of its descendants nk, and the length of the path is the number of edges it contains, k - 1 (see Figure 3). The depth of a node n is defined recursively: the root has depth zero, and any other node has depth one greater than that of its parent. The height of a node n is also defined recursively; the height of node n equals the length of some longest path from n down to an external node in n's subtree.
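The recursive definitions of height and depth can be sketched over a minimal node type (illustrative code; the struct and function names are assumed). Here an empty subtree, represented by NULL, plays the role of an external node with height zero.

```cpp
#include <algorithm>
#include <cstddef>

// Minimal node type for illustration (names assumed).
struct TreeNode {
    TreeNode* lchild;
    TreeNode* rchild;
    TreeNode() : lchild(NULL), rchild(NULL) {}
};

// height: length of a longest path from n down to an external node;
// an empty tree (external node, here NULL) has height 0
int height(TreeNode* n)
{
    if (n == NULL) return 0;
    return 1 + std::max(height(n->lchild), height(n->rchild));
}

// depth of node target within the tree rooted at root, or -1 if absent;
// the root has depth 0, and each step down adds one edge
int depth(TreeNode* root, TreeNode* target, int d = 0)
{
    if (root == NULL) return -1;
    if (root == target) return d;
    int dl = depth(root->lchild, target, d + 1);
    return (dl >= 0) ? dl : depth(root->rchild, target, d + 1);
}
```

A chain of three internal nodes has four external nodes and height 3, consistent with the Theorem 1 bound of at least log2 4 = 2.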

The height of a binary tree is defined as the height of its root node (for example, the binary tree of Figure 3). In a pointer-based implementation of binary trees, nodes are objects of class TreeNode. The class constructor creates a binary tree of size one: the sole internal node has two empty children, each represented by NULL. Searching encompasses such operations as finding a given item in a set of distinct items, locating the smallest or largest item in the set, and deciding whether the set contains a given item.

To search within a binary tree efficiently, its items must be arranged in a particular way. Specifically, a binary tree is called a binary search tree if its items are organized as follows: For each item n, all the items in the left subtree of n are less than n, and all the items in the right subtree of n are greater than n. In general, there exist numerous binary search trees of different shape for any given set of items.

It is implicit that items belong to a linear order and, consequently, that any two can be compared. Examples of linear orders include the integers and the real numbers under their usual ordering. (Figure 3: three binary search trees over the same set of items.) Visit functions are not bound to search trees; different visit functions can be applied to the items in the same search tree. The class contains data member root, which points to the root of the binary search tree (a TreeNode object), and data member cmp, which points to a comparison function. The class destructor deletes the entire tree by invoking the root's destructor. The path from the root node down to val is called the search path for val.

Member function find implements this search algorithm, returning a pointer to the item that is sought, or NULL if no such item exists in the search tree. Initially, when we start at the root, the field includes every item in the search tree. In general, when at node n, the field consists of the descendants of n. The process continues until either val is located or no candidates remain, implying that val does not occur in the search tree.
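The search just described can be sketched over a minimal binary search tree node (an illustrative reconstruction, not the book's class; for simplicity it stores int items and compares directly rather than through a cmp function):

```cpp
#include <cstddef>

// Minimal BST node for illustration (names assumed).
struct TreeNode {
    int val;
    TreeNode* lchild;
    TreeNode* rchild;
    TreeNode(int v) : val(v), lchild(NULL), rchild(NULL) {}
};

// Follow the search path from node n; at each node the field of
// candidates narrows to one subtree. Returns the node holding x,
// or NULL if x does not occur in the tree.
TreeNode* find(TreeNode* n, int x)
{
    while (n != NULL) {
        if (x < n->val)      n = n->lchild;
        else if (x > n->val) n = n->rchild;
        else                 return n;
    }
    return NULL;
}
```

Each comparison discards an entire subtree, which is what makes search in a well-balanced tree fast.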

To find the smallest item in a search tree, we start at the root and repeatedly follow left-child links until reaching a node n whose left child is empty; node n contains the smallest item. We can also view this process as a tournament. When at node n, the field of candidates consists of the descendants of n. Member function inorder performs a special kind of traversal known as an inorder traversal.

The strategy is to first inorder traverse the left subtree, then visit the root, and finally inorder traverse the right subtree. We visit a node by applying a visit function to the item stored in the node. Member function inorder serves as the driver function. It invokes private member function _inorder, which performs an inorder traversal from node n and applies function visit to each item reached.

Indeed, inorder traversal of any binary search tree visits its items in increasing order. Therefore, n is visited in the correct position. Since n is an arbitrary node, the same holds for every node.
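The traversal can be sketched as follows (structure assumed; for illustration the "visit" collects each item into a vector rather than calling a visit-function pointer):

```cpp
#include <cstddef>
#include <vector>

// Minimal BST node for illustration (names assumed).
struct TreeNode {
    int val;
    TreeNode *lchild, *rchild;
    TreeNode(int v) : val(v), lchild(NULL), rchild(NULL) {}
};

// Inorder traversal: left subtree, then the node, then the right
// subtree. Over a binary search tree this visits items in
// increasing order.
void inorder(TreeNode* n, std::vector<int>& out)
{
    if (n == NULL) return;
    inorder(n->lchild, out);   // all items less than n->val
    out.push_back(n->val);     // "visit" the node
    inorder(n->rchild, out);   // all items greater than n->val
}
```

Running this over any binary search tree produces its items in sorted order, which is the property the text verifies.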

Member function inorder provides a way to list the items stored in a binary search tree in sorted order. For example, if a is a SearchTree of strings, we can print the strings in lexicographic order with the instruction a.inorder(printString). Here the visit function printString might be defined appropriately. Neither the predecessor of the first node visited nor the successor of the last node visited is defined; in a binary search tree, these nodes hold the smallest and largest items in the tree, respectively.

In addition to maintaining a pointer n to the current node, we maintain a pointer p to n's parent. Thus when n reaches some external node, p points to the node that is to become the new item's parent.

To perform the insertion, we allocate a new node to hold the new item and then link parent p to this new node (Figure 3: inserting an item into a binary search tree). Member function insert inserts item val into this binary search tree. Removing a node that has at most one nonempty child is easy: we link the node's parent to this child. However, things are more difficult if the node to be removed has two nonempty children: the node's parent can link to one of the children, but what do we do with the other child?
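The insertion scheme, walking the search path while tracking the parent p, can be sketched as follows (an illustrative reconstruction over a minimal node type; names are assumed):

```cpp
#include <cstddef>

// Minimal BST node for illustration (names assumed).
struct TreeNode {
    int val;
    TreeNode *lchild, *rchild;
    TreeNode(int v) : val(v), lchild(NULL), rchild(NULL) {}
};

// Insert val into the tree rooted at *root (root may be updated).
void insert(TreeNode*& root, int val)
{
    TreeNode* n = root;
    TreeNode* p = NULL;          // parent of n along the search path
    while (n != NULL) {          // follow the search path for val
        p = n;
        n = (val < n->val) ? n->lchild : n->rchild;
    }
    TreeNode* node = new TreeNode(val);
    if (p == NULL)           root = node;      // tree was empty
    else if (val < p->val)   p->lchild = node; // new node becomes a child of p
    else                     p->rchild = node;
}
```

When the search path reaches an empty (external) position, p points to the node that becomes the new item's parent, exactly as described above.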

The solution is not to remove the node from the tree; rather, we replace the item it contains by the item's successor and then remove the node containing this successor.

To remove an item from a search tree, we first zigzag along the item's search path, from the root down to the node n that contains the item. At this point, three cases, illustrated in Figure 3, can arise:
1. Node n has an empty left child. In this case replace the link down to n (stored in n's parent, if any) by a link to n's right child.
2. Node n has a nonempty left child but an empty right child. Replace the link down to n by a link to n's left child.

3. Node n has two nonempty children.

Find the successor to n (call it m), copy the data item stored in m into node n, and then recursively remove node m from the search tree. It is important to observe that a binary search tree results in each case. Consider case 1.

If node n to be removed is a left child, the items stored in n's right subtree are less than the item in n's parent p. When n is removed and its right subtree is linked to p, the items stored in p's new left subtree are, of course, still less than the item in p. Since no other links are changed, the tree remains a binary search tree. The argument is symmetric if node n is a right child, and trivial if n is the root. Case 2 is argued similarly. In case 3, the item v stored in node n is overwritten by the next-larger item stored in node m (call it w), and then w is removed from the tree.


In the binary tree that results, the values in n's left subtree remain less than w, since they are less than v and v is less than w. (Figure 3: the three cases that can arise when removing an item from a binary search tree.)

Moreover, the items in n's right subtree are greater than w since (1) they are greater than v, (2) no item in the binary search tree lies between v and w, and (3) w was removed from among them. Observe that in case 3, node m must exist since n's right subtree is nonempty.

Furthermore, the recursive call to remove m cannot lead to a regress of recursive calls: since node m has no left child, case 1 applies when it gets removed. Observe that inorder traversal of each successive binary tree visits the nodes in increasing order, verifying that each is in fact a binary search tree.
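The three removal cases can be sketched as follows (an illustrative reconstruction over a minimal node type, not the book's code; passing the link field by reference stands in for the text's "n names the link field in old's parent" device):

```cpp
#include <cstddef>

// Minimal BST node for illustration (names assumed).
struct TreeNode {
    int val;
    TreeNode *lchild, *rchild;
    TreeNode(int v) : val(v), lchild(NULL), rchild(NULL) {}
};

// n is a reference to the link field holding this subtree's root.
void removeItem(TreeNode*& n, int val)
{
    if (n == NULL) return;                        // val not in tree
    if (val < n->val)      removeItem(n->lchild, val);
    else if (val > n->val) removeItem(n->rchild, val);
    else if (n->lchild == NULL) {                 // case 1: empty left child
        TreeNode* old = n;
        n = n->rchild;                            // relink parent to right child
        delete old;
    } else if (n->rchild == NULL) {               // case 2: empty right child
        TreeNode* old = n;
        n = n->lchild;                            // relink parent to left child
        delete old;
    } else {                                      // case 3: two nonempty children
        TreeNode* m = n->rchild;                  // successor m of n:
        while (m->lchild != NULL) m = m->lchild;  //   leftmost in right subtree
        n->val = m->val;                          // overwrite v with successor w
        removeItem(n->rchild, m->val);            // remove successor's node
    }
}
```

Since the successor m has no left child, the recursive call in case 3 always terminates in case 1, as the text observes.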

Member function remove is the public member function for removing the node containing a given item. It calls private member function _remove, which does the actual work. (Figure 3: a sequence of item removals; remove 8 from the binary tree, then remove 5, then remove 3.)

When the node to be deleted (old) is reached, n names the link field in old's parent which contains the link down to old. Member function removeMin removes the smallest item from this search tree and returns it. The idea is to insert all the items into a search tree and then iteratively remove the smallest item until all items have been removed. Program heapSort sorts an array s of n items using comparison function cmp.
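The sort described here, insert everything into a search tree and then repeatedly removeMin, can be sketched as follows (an illustrative reconstruction; the node type and names are assumed, and this version sorts ints in place rather than taking a cmp function):

```cpp
#include <cstddef>
#include <vector>

// Minimal BST node for illustration (names assumed).
struct TreeNode {
    int val;
    TreeNode *lchild, *rchild;
    TreeNode(int v) : val(v), lchild(NULL), rchild(NULL) {}
};

void insert(TreeNode*& n, int val)
{
    if (n == NULL) n = new TreeNode(val);
    else insert(val < n->val ? n->lchild : n->rchild, val);
}

// Remove and return the smallest item: follow left-child links to the
// node with an empty left child, then relink its parent to its right child.
int removeMin(TreeNode*& n)
{
    if (n->lchild != NULL) return removeMin(n->lchild);
    int v = n->val;
    TreeNode* old = n;
    n = n->rchild;
    delete old;
    return v;
}

// Sort s by inserting every item, then extracting minima in order.
void treeSort(std::vector<int>& s)
{
    TreeNode* root = NULL;
    for (std::size_t i = 0; i < s.size(); ++i) insert(root, s[i]);
    for (std::size_t i = 0; i < s.size(); ++i) s[i] = removeMin(root);
}
```

On a reasonably balanced tree each insert and removeMin costs O(log n), giving O(n log n) overall; a degenerate tree degrades this to O(n^2).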

Given an item in a binary search tree, report the next larger or next smaller item. Although inorder traversal yields all the items in increasing order, it does not help us move efficiently from an arbitrary item to its successor or predecessor.


In this section we cover braided binary search trees, which are binary search trees with a linked list threaded through the nodes in increasing order.

For brevity, we will refer to a braided binary search tree as a braided search tree, and the linked list that threads through it as a braid. We will implement braided search trees with the class BraidedSearchTree. This class differs from class SearchTree of the previous section in three significant ways. First, a BraidedSearchTree object possesses a linked list, the braid, which links each node to its successor and predecessor.

Second, a BraidedSearchTree object maintains a window which is at all times positioned over some item in the tree. The window serves the same purpose as it does in class List: many operations refer to the window or to the item in the window. Third, member root of a BraidedSearchTree points to a header node, a "pseudoroot" whose right child is the actual root of the braided search tree.

Along the braid, the node containing the smallest item in the tree follows the header node, and the node containing the largest item precedes the header node.

Hence the header node corresponds to a head position which simultaneously occurs before the first position and after the last position (Figure 3).
