CSET 3150: Midterm
CSET 3150 Advanced Programming Midterm Exam
Posted: Oct. 15, 2009
Due: 11:00pm Oct. 16, 2009
Total Score: 100
Course Number: _3150002_________
Name: ___Andrew Lechlak__________
________________________________________________________________________
PLEASE READ
You have 9 questions in total, with points indicated on each. Question 9 is extra credit, which can be added to your total score if it’s less than 100.
This is a take-home exam. You have one day to finish it.
Submit your solution by 11:00 PM on Oct. 16, 2009. Late submissions will not be accepted. CSET 3150-001 students: submit your solution by email. CSET 3150-002 students: submit using WebCT.
Typeset your answers and make sure to put your name and course number (CSET 3150-001 or CSET 3150-002) on your submission.
Good luck!
________________________________________________________________________
Q1. (8 points)
Are the following statements true or false?
 Merge sort has the worst case running time Θ(n^{2}). False
 Insertion sort has the best case running time Θ(n log n). False
 Counting sort has better running time than quick sort under certain conditions. True
 Any comparison sort algorithm requires Ω(n log n) comparisons in the worst case. True
Q2. (12 points)
Rank the following functions in order of growth. Fill in each box with a number 1, 2, 3, 4, 5, or 6 (1 indicates the smallest, while 6 indicates the largest).
5  2  4  3  6  1 
Q3. (15 points)
Describe the Divide-and-Conquer method for the design of algorithms, and show two algorithms that use this method to yield efficient solutions.
The Divide-and-Conquer method is based on multi-branched recursion: it breaks the problem down into subproblems until each can be solved directly, then combines the solved subproblems into the solution to the original problem.
Two Algorithms that use this are Merge Sort and Quick Sort.
Merge Sort
 Divide the list into two smaller lists of about equal sizes
 Sort each smaller list recursively
 Merge the two sorted lists to get one sorted list
Merge Sort – Pseudocode
function merge(left, right)
    var list result
    while length(left) > 0 and length(right) > 0
        if first(left) ≤ first(right)
            append first(left) to result
            left = rest(left)
        else
            append first(right) to result
            right = rest(right)
    end while
    if length(left) > 0
        append left to result
    else
        append right to result
    return result
MERGE-SORT(A, p, r)
1 if p < r
2     then q ← ⌊(p + r)/2⌋
3          MERGE-SORT(A, p, q)
4          MERGE-SORT(A, q + 1, r)
5          MERGE(A, p, q, r)
Quick Sort
 Divide step: pick any element (pivot) v in S
 Partition S − {v} into two disjoint groups:
  S1 = {x ∈ S − {v} | x ≤ v}
  S2 = {x ∈ S − {v} | x ≥ v}
 Conquer step: recursively sort S1 and S2
 Combine step: the sorted S1, followed by v, followed by the sorted S2
Quick Sort – Pseudocode
function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]  // move pivot to end
    storeIndex := left
    for i from left to right - 1  // left ≤ i < right
        if array[i] ≤ pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]  // move pivot to its final place
    return storeIndex
Input: an array A[p..r]
Quicksort(A, p, r) {
    if (p < r) {
        q = Partition(A, p, r)  // q is the position of the pivot element
        Quicksort(A, p, q-1)
        Quicksort(A, q+1, r)
    }
}
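The pseudocode above can be written as runnable Python (a minimal sketch using the same Lomuto-style partition with the last element as pivot):

```python
def partition(a, p, r):
    # Lomuto partition: the last element a[r] is the pivot.
    pivot = a[r]
    store = p
    for i in range(p, r):
        if a[i] <= pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[r] = a[r], a[store]  # move pivot into its final place
    return store

def quicksort(a, p, r):
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)
        quicksort(a, q + 1, r)

data = [7, 1, 2, 8, 6, 4, 3, 5]
quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 3, 4, 5, 6, 7, 8]
```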
Q4. (15 points) Solve the following recurrences:
1)
a = 2; b = 4; c = 1
We have log_b a = log_4 2 = 0.5 < 1 = c, so f(n) = Θ(n^c) dominates and the corresponding case of the Master Theorem (Case 3 in CLRS) applies. Therefore, T(n) = Θ(n).
2) => Linear-search recurrence // supposed to memorize this
T(n) = T(n−1) + 1, with T(0) = 0
T(1) = T(0) + 1 = 1
T(2) = T(1) + 1 = 2
By induction, T(n) = n, so T(n) = O(n).
The recursion shows that each answer is the previous answer plus 1, so after n steps the +1 terms sum to n.
3)
a = 1; b = 3/2; f(n) = 1
n^(log_b a) = n^(log_{3/2} 1) = n^0 = 1
f(n) = Θ(n^(log_b a)) = Θ(1), so Case 2 applies. Therefore, T(n) = Θ(lg n).
Q5. (10 points)
Illustrate the operation of the PARTITION function on array A = <7, 1, 2, 8, 6, 4, 3, 5>.
The pivot is the last element, x = 5. The array after each swap:
7 1 2 8 6 4 3 5
1 7 2 8 6 4 3 5
1 2 7 8 6 4 3 5
1 2 4 8 6 7 3 5
1 2 4 3 6 7 8 5
1 2 4 3 5 7 8 6
The pivot ends at position 5 (1-based), with all smaller elements to its left and all larger elements to its right.
def partition(a, start, end, pivotIndex):
    # end is the index of the last element in the range.
    low = start
    high = end - 1                  # after we remove the pivot the range is one smaller
    pivotValue = a[pivotIndex]
    del a[pivotIndex]               # remove pivot from the array
    while True:
        while low <= high and a[low] < pivotValue:
            low = low + 1
        while low <= high and a[high] >= pivotValue:
            high = high - 1
        if low > high:
            break
        a[low], a[high] = a[high], a[low]
    a.insert(low, pivotValue)       # insert pivot into its final position
    return low
Q6. (12 points)
What are the time complexities of the following programs? Justify.
 Linear Search O(n)
 Binary Search O(logn)
(1)
boolean SearchA(int a[], int n, int X) {
int i;
for (i = 0; i < n; i++) {
if (a[i] == X) return true;
}
return false;
}
Linear Search O(n)
Linear search examines each element of the list in turn until a match is found. It is a very simple sequential search and can be used on an unsorted list. In the worst case the match is not in the array, or is at the very end, so n comparisons are needed.
(2)
int SearchB(A[0..N-1], value, low, high) {
    if (high < low)
        return -1; // not found
    mid = low + ((high - low) / 2);
    if (A[mid] > value)
        return SearchB(A, value, low, mid-1);
    else if (A[mid] < value)
        return SearchB(A, value, mid+1, high);
    else
        return mid; // found
}
Binary Search O(logn)
Binary search is a divide-and-conquer search algorithm on a sorted array. It checks the middle element; if it is not the value, it eliminates half of the array from consideration and searches only the other half (upper or lower, depending on the comparison). The search space halves each time: n > n/2 > n/4 > … > 1, which takes O(log n) steps.
Q7. (12 points)
Given an array (not sorted), write an algorithm to determine whether an element occurs more than n/2 times, in time complexity not more than O(n). Note n is the size of the array.
For example, for the array [3, 3, 5, 3, 3, 4, 4, 8, 3, 3, 3, 9], your program will return yes for value 3.
FIND-MAJORITY-ELEMENT(A)
    for i ← 1 to length[A]
        if stack is empty
            then add A[i] onto the stack
        else if A[i] equals the top element of the stack
            then add A[i] onto the stack
        else remove the top element from the stack
    if stack is empty
        then return NoMajority
    candidate ← top element of the stack
    counter ← 0
    for i ← 1 to length[A]
        if A[i] equals candidate
            then counter ← counter + 1
    if counter > ⌊length[A]/2⌋
        then return candidate
    return NoMajority
What it does:
It scans the array using a stack. If the stack is empty, or the current element equals the top of the stack, the element is added onto the stack; if the top of the stack differs from it, the top is removed. So if an element fulfilling our requirement exists, it will be on top of the stack at the end. Candidate is the name for that top element; we count how many times it appears, and if it appears more than n/2 times we know it is the majority element.
* Note add = push and remove = pop *
Proof sketch: if a majority element x appears i > n/2 times, then every occurrence of x either adds x to the stack or removes a different element from it. If x were not on the stack at the end, every occurrence of x would have been cancelled by a distinct non-x element; but there are only n − i < i other elements, a contradiction.
Since each element is added to and removed from the stack at most once, in constant time per operation, the running time is Θ(n).
Θ(n) satisfies the O(n) requirement, since big O is an upper bound (the worst-case scenario).
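Because the stack described above only ever holds copies of a single value, it can be replaced by a counter; this is the Boyer-Moore majority vote. A minimal Python sketch (function name is mine):

```python
def majority_element(a):
    # Phase 1: find the only possible majority candidate.
    candidate, count = None, 0
    for x in a:
        if count == 0:
            candidate = x              # empty "stack": start a new run
        count += 1 if x == candidate else -1
    # Phase 2: verify the candidate really occurs more than n/2 times.
    if candidate is not None and a.count(candidate) > len(a) // 2:
        return candidate
    return None

print(majority_element([3, 3, 5, 3, 3, 4, 4, 8, 3, 3, 3, 9]))  # 3
```

Both phases are single O(n) passes with O(1) extra space.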
Q8. (16 points)
Given an array A of n numbers, write an algorithm to place all the odd numbers, sorted in non-decreasing order, at the beginning of the array, followed by all the even numbers, sorted in non-increasing order. For example,
Before sorting: 5, 3, 10, 12, 2, 34, 55, 65, 4, 7, 5
After sorting: 3, 5, 5, 7, 55, 65, 34, 12, 10, 4, 2
What’s the complexity of your algorithm?
Complexity: splitting the array into odds and evens takes O(n), but sorting the two halves with insertion sort costs O(n²) in the worst case, so the algorithm below is O(n²) overall.
Insertion sort is a simple sorting algorithm, a comparison sort in which the sorted array (or list) is built one entry at a time.
for i ← 1 to length[A]
    if A[i] MOD 2 = 0
        even[j++] = A[i]   // append to the even array
    else
        odd[k++] = A[i]    // append to the odd array
insertionSort(array odd)
begin
    for i := 1 to length[odd] do
    begin
        value := odd[i];
        j := i - 1;
        while j >= 0 and odd[j] > value do
        begin
            odd[j + 1] := odd[j];
            j := j - 1;
        end;
        odd[j + 1] := value;
    end;
end;
insertionSort(array even)
begin
    for i := 1 to length[even] do
    begin
        value := even[i];
        j := i - 1;
        while j >= 0 and even[j] > value do
        begin
            even[j + 1] := even[j];
            j := j - 1;
        end;
        even[j + 1] := value;
    end;
end;
begin
    for i := 1 to length[odd] do    // odds in non-decreasing order
        print odd[i]
end;
begin
    for i := length[even] downto 1 do    // evens in non-increasing order
        print even[i]
end;
Therefore…
Complexity is O(n) for the split plus O(n²) for the two insertion sorts: O(n²) overall.
———————————————————————————–
#include <iostream>
using namespace std;

void sort(int *a, int n) {
    for (int j = 1; j < n; j++) {
        for (int k = 0; k < j; k++) {
            if (a[j] < a[k]) {
                int temp = a[k];
                a[k] = a[j];
                a[j] = temp;
            }
        }
    }
    for (int i = 0; i < n; i++) {
        cout << a[i] << "\n";
    }
}

int main() {
    int a[] = {5, 3, 10, 12, 2, 34, 55, 65, 4, 7, 5};
    sort(a, 11);
    return 0;
}
———————————————————————————–
This version sorts the whole array in ascending order; its two nested loops make it O(n²), not O(n).
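The split-then-sort approach can also be sketched in Python (function name is mine; Python's built-in sort, O(n log n), stands in for insertion sort):

```python
def odd_even_sort(a):
    odds = sorted(x for x in a if x % 2 != 0)                    # non-decreasing
    evens = sorted((x for x in a if x % 2 == 0), reverse=True)   # non-increasing
    return odds + evens

print(odd_even_sort([5, 3, 10, 12, 2, 34, 55, 65, 4, 7, 5]))
# [3, 5, 5, 7, 55, 65, 34, 12, 10, 4, 2]
```

The output matches the "After sorting" line in the problem statement.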
Q9. (Extra Credit: 10 points)
Suppose you are given an array A[1..n] of n distinct integers, sorted in increasing order.
Describe and analyze an algorithm to determine whether there is an index i such that A[i] = i.
(A better-than-linear-time solution is expected.)
k ∈ [1..n] with A[1..k] increasing and A[k+1..n] decreasing
SeekK(A, low, high)
    i = ⌊(low + high)/2⌋
    if (low == 0 or high == length(A)) then
        output one-way sequence (either only ascending or descending)
    else if (A[i − 1] < A[i] and A[i] ≥ A[i + 1]) then
        output k = i
    else if (A[i − 1] ≤ A[i] and A[i] ≤ A[i + 1]) then
        call SeekK(A, i, high)
    else
        call SeekK(A, low, i)
    endif
endSeekK
We use binary search because the array is already sorted.
1.) Use a binary search to locate an index with A[i] = i.
2.) Compare A[mid] with mid.
3.) If A[mid] < mid: since the entries are distinct increasing integers, A[j] − j is non-decreasing, so any fixed point must lie to the right; search A[mid+1..n].
4.) If A[mid] > mid: search A[1..mid−1].
5.) Return index i only when A[i] = i.
6.) Recursion ⇒ runtime ⇒ T(n) = T(n/2) + c = O(lg n).
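The A[i] = i binary search can be sketched as follows (0-based Python comparing A[mid] with mid + 1 to match the 1-based statement; function name is mine):

```python
def fixed_point(A):
    # A is sorted, distinct integers; conceptually 1-indexed.
    low, high = 0, len(A) - 1
    while low <= high:
        mid = (low + high) // 2
        if A[mid] == mid + 1:
            return mid + 1          # found 1-based index i with A[i] = i
        elif A[mid] < mid + 1:
            low = mid + 1           # A[j] - j is non-decreasing: go right
        else:
            high = mid - 1          # go left
    return None

print(fixed_point([-3, 0, 3, 5, 9]))  # 3, since A[3] = 3 (1-based)
```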
CSET 3150: Homework 6
24.3-1
Let the node we start from be called the initial node. Let the distance of a node Y be the distance from the initial node to Y. Dijkstra's algorithm assigns some initial distance values and tries to improve them step by step.
 Assign to every node a distance value. Set it to zero for our initial node and to infinity for all other nodes.
 Mark all nodes as unvisited. Set initial node as current.
 For current node, consider all its unvisited neighbors and calculate their distance (from the initial node). For example, if current node (A) has distance of 6, and an edge connecting it with another node (B) is 2, the distance to B through A will be 6+2=8. If this distance is less than the previously recorded distance (infinity in the beginning, zero for the initial node), overwrite the distance.
 When we are done considering all neighbors of the current node, mark it as visited. A visited node will not be checked ever again; its distance recorded now is final and minimal.
 Set the unvisited node with the smallest distance (from the initial node) as the next “current node” and continue from step 3.
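The steps above can be sketched with a binary heap as the priority queue (graph encoding and names are mine):

```python
import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, weight), ...]}; returns shortest distances from source.
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]                # (distance, vertex) pairs
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue                  # stale entry; u is already final
        visited.add(u)                # step 4: mark as visited, distance is final
        for v, w in graph[u]:         # step 3: relax all unvisited neighbors
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {'a': [('b', 6), ('e', 1)], 'b': [('d', 2)], 'd': [], 'e': [('b', 2)]}
print(dijkstra(g, 'a'))  # {'a': 0, 'b': 3, 'd': 5, 'e': 1}
```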
Black circle background = set S
D values = inside circles
Arrows = π values
(The drawings for parts b, d, f, and g appeared here in the original document.)
25.1-1
The four steps for developing a dynamicprogramming algorithm to solve a problem are:
 Characterize the structure of an optimal solution.
 Recursively define the value of an optimal solution.
 Compute the value of an optimal solution in a bottomup fashion.
 Construct an optimal solution from the computed information.
Algorithm
let val q: queue = new_queue()
    val visited: vertexMap = create_vertexMap()
    fun expand(v: vertex) =
        let val neighbors: vertex list = Graph.outgoing(v)
            val dist: int = valOf(get(visited, v))
            fun handle_edge(v': vertex, weight: int) =
                case get(visited, v') of
                    SOME(d') =>
                        if dist + weight < d'
                        then ( add(visited, v', dist + weight);
                               incr_priority(q, v', dist + weight) )
                        else ()
                  | NONE => ( add(visited, v', dist + weight);
                              push(q, v', dist + weight) )
        in
            app handle_edge neighbors
        end
in
    add(visited, v0, 0);
    expand(v0);
    while not (empty_queue(q)) do expand(pop(q))
end
SLOW-ALL-PAIRS-SHORTEST-PATHS(W)
 n ← rows[W]
 L(1) ← W
 for m ← 2 to n − 1 do
  L(m) ← EXTEND-SHORTEST-PATHS(L(m−1), W)
 return L(n−1)
SLOW-ALL-PAIRS-SHORTEST-PATHS
0  ∞  ∞  ∞  1  ∞ 
1  0  ∞  2  ∞  ∞ 
∞  2  0  ∞  ∞  8 
4  ∞  ∞  0  3  ∞ 
∞  7  ∞  ∞  0  ∞ 
∞  5  10  ∞  ∞  0 
0  6  ∞  ∞  1  ∞ 
2  0  ∞  2  0  ∞ 
3  3  0  4  ∞  8 
4  10  ∞  0  5  ∞ 
8  7  ∞  9  0  ∞ 
6  5  10  7  ∞  0 
0  6  ∞  8  1  ∞ 
2  0  ∞  2  0  ∞ 
0  3  0  4  2  8 
4  2  ∞  0  5  ∞ 
5  7  ∞  9  0  ∞ 
3  5  10  7  5  0 
0  6  ∞  8  1  ∞ 
2  0  ∞  2  0  ∞ 
0  3  0  4  2  8 
4  2  ∞  0  5  ∞ 
5  7  ∞  9  0  ∞ 
3  5  10  7  5  0 
0  6  ∞  8  1  ∞ 
2  0  ∞  2  0  ∞ 
0  3  0  4  2  8 
4  2  ∞  0  5  ∞ 
5  7  ∞  9  0  ∞ 
3  5  10  7  5  0 
FASTER-ALL-PAIRS-SHORTEST-PATHS(W)
0  6  ∞  ∞  1  ∞ 
2  0  ∞  2  0  ∞ 
3  3  0  4  ∞  8 
4  10  ∞  0  5  ∞ 
8  7  ∞  9  0  ∞ 
6  5  10  7  ∞  0 
0  6  ∞  8  1  ∞ 
2  0  ∞  2  0  ∞ 
0  3  0  4  2  8 
4  2  ∞  0  5  ∞ 
5  7  ∞  9  0  ∞ 
3  5  10  7  5  0 
0  6  ∞  8  1  ∞ 
2  0  ∞  2  0  ∞ 
0  3  0  4  2  8 
4  2  ∞  0  5  ∞ 
5  7  ∞  9  0  ∞ 
3  5  10  7  5  0 
0  6  ∞  8  1  ∞ 
2  0  ∞  2  0  ∞ 
0  3  0  4  2  8 
4  2  ∞  0  5  ∞ 
5  7  ∞  9  0  ∞ 
3  5  10  7  5  0 
FASTER-ALL-PAIRS-SHORTEST-PATHS(W)
 n ← rows[W]
 L(1) ← W
 m ← 1
 while m < n − 1 do
  L(2m) ← EXTEND-SHORTEST-PATHS(L(m), L(m))
  m ← 2m
 return L(m)
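Both procedures rely on EXTEND-SHORTEST-PATHS; here is a Python sketch of the repeated-squaring version on a small 3-vertex example (matrix encoding is mine):

```python
INF = float('inf')

def extend_shortest_paths(L, W):
    # One "min-plus product": best path = best prefix in L plus one step set in W.
    n = len(L)
    return [[min(L[i][k] + W[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faster_all_pairs_shortest_paths(W):
    n = len(W)
    L, m = W, 1
    while m < n - 1:
        L = extend_shortest_paths(L, L)   # L(2m) from L(m): squaring
        m *= 2
    return L

W = [[0, 3, INF],
     [INF, 0, 2],
     [1, INF, 0]]
print(faster_all_pairs_shortest_paths(W))  # [[0, 3, 5], [3, 0, 2], [1, 4, 0]]
```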
NODE  Goes TO  And TO 
1  5  
2  4  
3  2  6 
4  1  5 
5  2  
6  2  3 
32.1-2
You can use a linear search of T.
Suppose all characters of P are distinct. A mismatch between T and position i of P (line 4 of NAIVE-STRING-MATCHER) implies we can continue the search from shift s + i in T, giving a single linear pass through T.
 n ← length[T]
 m ← length[P]
 for s ← 0 to n − m do
  if P[1..m] = T[s+1..s+m]
   then return valid shift s
The way string matching works is quite simple. You take a string, choose the pattern you are looking for, and step through the string checking for the pattern at each position. An example would be as follows:
String: My Moorish dog likes chickens more than monsters
(Ignoring capitals)
Pattern: “mo”
Solution: 3
Mo orish
Mo re
Mo nsters
See very simple!
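The example can be checked with a sketch of the naive matcher (the text is lower-cased, since the example ignores capitals):

```python
def naive_string_matcher(T, P):
    # Report every valid shift s with T[s : s+m] == P.
    n, m = len(T), len(P)
    return [s for s in range(n - m + 1) if T[s:s + m] == P]

text = "my moorish dog likes chickens more than monsters"
shifts = naive_string_matcher(text, "mo")
print(shifts)       # [3, 30, 40]
print(len(shifts))  # 3 matches, as in the example
```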
So if you have a pattern P with all distinct characters, you can accelerate the matcher, but the runtime will still be O(n): you unfortunately have to examine each character of T to find where the pattern could start and match it against the string. As we have learned, a bottom-up approach is usually more efficient, since dynamic programming reuses the solutions to subproblems rather than solving the same problem repeatedly.
32.2-1
T = 3 1 4 1 5 9 2 6 5 3 5 8 9 7 9 3
P = 26
q = 11
26 ≡ 4 (mod 11)
Three spurious hits — the windows 15, 59, and 92 have the same residue as the pattern without matching it:
15 ≡ 4 (mod 11)
59 ≡ 4 (mod 11)
92 ≡ 4 (mod 11)
Rest of the windows:
31 ≡ 9 (mod 11)
14 ≡ 3 (mod 11)
41 ≡ 8 (mod 11)
26 ≡ 4 (mod 11) *not spurious — it is our pattern*
65 ≡ 10 (mod 11)
53 ≡ 9 (mod 11)
35 ≡ 2 (mod 11)
58 ≡ 3 (mod 11)
89 ≡ 1 (mod 11)
97 ≡ 9 (mod 11)
79 ≡ 2 (mod 11)
93 ≡ 5 (mod 11)
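The window values mod 11 can be checked directly (a sketch; the text is treated as a digit string):

```python
T = "3141592653589793"
P = "26"
q = 11
target = int(P) % q                     # 26 mod 11 = 4

windows = [int(T[s:s + 2]) for s in range(len(T) - 1)]
hits = [w for w in windows if w % q == target]
spurious = [w for w in hits if w != int(P)]
print(spurious)  # [15, 59, 92]: three spurious hits
```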
CSET 3150: Homework 5
Andrew Lechlak
15.2-1
Solve the matrix-chain order problem for this instance by computing MATRIX-CHAIN-ORDER(p), where p = (5, 10, 3, 12, 5, 50, 6). The resulting m table:
i/j  1  2  3  4  5  6 
1  0  150  330  405  1655  2010 
2  X  0  360  330  2430  1950 
3  X  X  0  180  930  1770 
4  X  X  X  0  3000  1860 
5  X  X  X  X  0  1500 
6  X  X  X  X  X  0 
The table is built from the fact that m(i,i) = 0 for all i; the algorithm then computes m(i,j) for chains of increasing length. The minimum cost for the full product is m(1,6) = 2010.
“Matrix chain multiplication is an optimization problem that can be solved using dynamic programming. Given a sequence of matrices, we want to find the most efficient way to multiply these matrices together. The problem is not actually to perform the multiplications, but merely to decide in which order to perform the multiplications.
We have many options because matrix multiplication is associative. In other words, no matter how we parenthesize the product, the result will be the same. For example, if we had four matrices A, B, C, and D, we would have:
(ABC)D = (AB)(CD) = A(BCD) = A(BC)D = …
However, the order in which we parenthesize the product affects the number of simple arithmetic operations needed to compute the product, or the efficiency. For example, suppose A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix. Then,
(AB)C = (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations
A(BC) = (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations
Clearly the first method is the more efficient. Now that we have identified the problem, how do we determine the optimal parenthesization of a product of n matrices? We could go through each possible parenthesization (brute force), but this would require exponential time, which is very slow and impractical for large n. The solution, as we will see, is to break up the problem into a set of related subproblems. By solving subproblems one time and reusing these solutions many times, we can drastically reduce the time required. This is known as dynamic programming."
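The m table above can be reproduced with the standard MATRIX-CHAIN-ORDER recurrence (a Python sketch, 1-based like the table):

```python
def matrix_chain_order(p):
    # p has length n+1; matrix Ai is p[i-1] x p[i]. Returns the 1-based m table.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))   # best split point k
    return m

m = matrix_chain_order([5, 10, 3, 12, 5, 50, 6])
print(m[1][6])  # 2010, the minimum number of scalar multiplications
```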
16.1-1
Give a dynamic-programming algorithm for the activity-selection problem, based on the recurrence equation (16.3) in section 16.1. Have the algorithm compute the sizes c[i,j] as defined above and also produce the maximum-size subset A of activities. Assume that the inputs have been sorted as in equation (16.1). Compare the running time of your solution to the running time of GREEDY-ACTIVITY-SELECTOR, given:
f_0 ≤ f_1 ≤ f_2 ≤ … ≤ f_n < f_{n+1}
"We are given n activities, each represented by a start time s_i and finish time f_i. Two activities i and j are said to be non-conflicting if s_i ≥ f_j or s_j ≥ f_i. We are required to find a maximum set S of non-conflicting activities, i.e. there does not exist a solution set S' for this problem such that |S'| > |S|."
Sort the set of activities by finishing time f[i]
S = {1}
f = f[1]
for i = 2 to n
    if s[i] ≥ f
        S = S ∪ {i}
        f = f[i]
endfor
That can be written as
c[i,j] = 0                                   if S_ij = ∅
c[i,j] = max_{i<k<j} { c[i,k] + c[k,j] + 1 } if S_ij ≠ ∅
Activity selection is the problem of scheduling a resource among several competing activities. The recurrence above solves the placement problem, but GREEDY-ACTIVITY-SELECTOR is the faster of the two: the greedy selector makes a single Θ(n) pass over the sorted input, while filling in the c[i,j] table takes polynomially more time.
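The greedy selector can be sketched as follows (the sample start/finish times are the standard CLRS example, used here only for illustration):

```python
def greedy_activity_selector(s, f):
    # s, f: start/finish times, already sorted by finish time.
    A = [0]                  # always take the first-finishing activity
    last = 0
    for i in range(1, len(s)):
        if s[i] >= f[last]:  # compatible with the last chosen activity
            A.append(i)
            last = i
    return A

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(greedy_activity_selector(s, f))  # [0, 3, 7, 10]: activities a1, a4, a8, a11
```

A single pass over the sorted input: Θ(n).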
22.1-1
Given the adjacencylist representation of a graph the out and indegree of every node can easily
be computed in O(E + V) time.
Directed Graph AdjacencyList
The out-degree of a vertex is the number of out-edges, or edges leaving it. The in-degree of a vertex is the number of in-edges, or edges entering it. This is basic background for graph algorithms.
OutDegree
1 For each u in V do
2 outdeg[u] = 0
3 For each u in V do
4 For each v in Adj[u] do
5 outdeg[u] = outdeg[u] + 1 // count # of edges from u
InDegree
1 For each u in V do
2 indeg[u] = 0
3 For each u in V do
4 For each v in Adj[u] do
5 indeg[v] = indeg[v] + 1 // count # of edges to v
Implying… O(E + V) + O(V) = O(E + V)
AdjacencyList = Adj
Directed Graph = G = (V, E)
Adj is the array of adjacency lists: Adj[u] contains every vertex v adjacent to u in G, i.e. every edge (u, v) in the set of edges E. We want to calculate the time it takes to count, for each vertex, the number of edges (u, v) entering and leaving it using the lists Adj[u].
Using…
for each u in V
    outdeg[u] ← 0
for each u in V
    for each v in Adj[u]
        outdeg[u] ← outdeg[u] + 1
return outdeg
We can calculate the out-degree using the algorithm above. For each vertex u in V, it returns in outdeg[u] the number of edges incident from u in the graph G.
Using…
for each v in V
    indeg[v] ← 0
for each u in V
    for each v in Adj[u]
        indeg[v] ← indeg[v] + 1
return indeg
We can calculate the in-degree of the graph G nearly the same way as the out-degree: the only change is that while scanning Adj[u] we increment indeg[v] for the endpoint v of each edge (u, v), instead of incrementing outdeg[u].
The initialization loop touches every vertex once, costing O(V). The nested loops then scan every adjacency list exactly once, touching each edge exactly once, costing O(E). Summing the costs: O(V) + O(E) = O(V + E).
With the adjacency-matrix representation, graph algorithms require Θ(V²) memory and this computation would need Ω(V²) time. Since this is an adjacency-list representation of G = (V, E), it requires less memory and time whenever E is much smaller than V². As a result, the runtime is O(V + E) and the memory requirement is Θ(V + E).
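The two loops above can be sketched directly (dictionary-of-lists encoding is mine; the example graph is small and arbitrary):

```python
def degrees(adj):
    # adj: {u: [v, ...]} adjacency lists of a directed graph.
    outdeg = {u: len(adj[u]) for u in adj}   # each list is scanned once: O(E)
    indeg = {u: 0 for u in adj}              # initialization: O(V)
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1                    # edge (u, v) enters v
    return indeg, outdeg

g = {1: [2, 4], 2: [5], 3: [5, 6], 4: [2], 5: [4], 6: [6]}
indeg, outdeg = degrees(g)
print(indeg)   # {1: 0, 2: 2, 3: 0, 4: 2, 5: 2, 6: 2}
print(outdeg)  # {1: 2, 2: 1, 3: 2, 4: 1, 5: 1, 6: 1}
```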
22.2-2
“BFS is an uninformed search method that aims to expand and examine all nodes of a graph or combination of sequences by systematically searching through every solution. In other words, it exhaustively searches the entire graph or sequence without considering the goal until it finds it. It does not use a heuristic algorithm. From the standpoint of the algorithm, all child nodes obtained by expanding a node are added to a FIFO queue. In typical implementations, nodes that have not yet been examined for their neighbors are placed in some container (such as a queue or linked list) called “open” and then once examined are placed in the container closed.”
def dfs(root, visited=None,
        preorder_process=lambda x: None):
    """
    Given a starting vertex, root, do a depth-first search.
    """
    from collections import deque
    to_visit = deque()
    if visited is None: visited = set()
    to_visit.append(root)   # Start with root
    while len(to_visit) != 0:
        v = to_visit.pop()  # pop from the right: LIFO, so this is depth-first
        if v not in visited:
            visited.add(v)
            preorder_process(v)
            to_visit.extend(v.neighbors)
BFS labels each vertex with the length of a shortest path (in number of edges) from the start vertex. That is what makes it special, and what makes it a shortest-path graph algorithm.
BREADTH-FIRST-SEARCH(G, s)
Input: a graph G and a vertex s.
Output: edges labeled as discovery and cross edges in the connected component.
Create a queue Q.
ENQUEUE(Q, s)  // insert s into Q
while Q is not empty do
    for each vertex v in Q do
        for all edges e incident on v do
            if edge e is unexplored then
                let w be the other endpoint of e
                if vertex w is unexplored then
                    mark e as a discovery edge
                    insert w into Q
                else
                    mark e as a cross edge
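A BFS that labels each vertex with its shortest-path distance in edges can be sketched as (graph encoding is mine):

```python
from collections import deque

def bfs_distances(adj, source):
    # adj: {u: [v, ...]}; returns {vertex: #edges on a shortest path from source}.
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()          # FIFO queue: closest vertices explored first
        for v in adj[u]:
            if v not in dist:    # first visit is along a shortest path
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

g = {'s': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': ['d'], 'd': []}
print(bfs_distances(g, 's'))  # {'s': 0, 'a': 1, 'b': 1, 'c': 2, 'd': 3}
```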
22.3-2
Show how depth-first search works on the graph of Figure 22.6. Assume that the for loop of lines 5-7 of the DFS procedure considers the vertices in alphabetical order, and assume that each adjacency list is ordered alphabetically. Show the discovery and finishing times for each vertex, and show the classification of each edge.
Algorithm
DFS(G)
    for each vertex u ∈ V[G]
        do color[u] ← WHITE
           π[u] ← NIL
    time ← 0
    for each vertex u ∈ V[G]
        do if color[u] = WHITE
            then DFS-VISIT(u)
DFS-VISIT(u)
    color[u] ← GRAY        ▷ White vertex u has just been discovered.
    time ← time + 1
    d[u] ← time
    for each v ∈ Adj[u]    ▷ Explore edge (u, v).
        do if color[v] = WHITE
            then π[v] ← u
                 DFS-VISIT(v)
    color[u] ← BLACK       ▷ Blacken u; it is finished.
    f[u] ← time ← time + 1
        u     v     w      x     y     z
π       NIL   u     NIL    y     v     w
d       1     2     –      4     3     –
f       –     –     –      –     –     –
color   gray  gray  white  gray  gray  white
Every vertex v visited by DFSVisit( v ) is BLACK (i.e. fully explored or finished).
DFS places discovered vertices in LIFO stack, exploring vertices as discovered.
DFS(G)
 for each u in V do
 color[u] = WHITE
 p[u] = NIL
 end for
 time = 0
 for each u in V do
 if color[u] == WHITE then
 DFS_VISIT(u)
 end if
 end for
 return
DFS_VISIT(u)
 color[u] = GRAY
 time = time+1
 d[u] = time
 for each v in Adj[u] do
 if color[v]==WHITE then
 p[v] = u
 DFS_VISIT(v)
 end if
 end for
 color[u] = BLACK
 time = time+1
 f[u] = time
 return
Vertex  Adj  Discover  Finish 
Q  S T W  1  16 
R  U Y  17  20 
S  V  2  7 
T  X Y  8  15 
U  Y  18  19 
V  W  3  6 
W  S  4  5 
X  Z  9  12 
Y  Q  13  14 
Z  X  10  11 
Q → S, T, W
R → U, Y
S → V
T → X, Y
U → Y
V → W
W → S
X → Z
Y → Q
Z → X
Edge  Classification 
(Q,S)  tree 
(Q,T)  tree 
(Q,W)  forward 
(R,U)  tree 
(R,Y)  cross 
(S,V)  tree 
(T,X)  tree 
(T,Y)  tree 
(U,Y)  cross 
(V,W)  tree 
(W,S)  back 
(X,Z)  tree 
(Y,Q)  back 
(Z,X)  back 
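The discovery and finishing times in the table can be reproduced by running DFS on the adjacency lists above, scanning vertices and lists alphabetically (a sketch):

```python
def dfs_timestamps(adj):
    color = {u: 'WHITE' for u in adj}
    d, f = {}, {}
    time = [0]

    def visit(u):
        time[0] += 1
        d[u] = time[0]
        color[u] = 'GRAY'
        for v in adj[u]:
            if color[v] == 'WHITE':
                visit(v)
        color[u] = 'BLACK'
        time[0] += 1
        f[u] = time[0]

    for u in sorted(adj):            # alphabetical order, as the exercise asks
        if color[u] == 'WHITE':
            visit(u)
    return d, f

adj = {'q': ['s', 't', 'w'], 'r': ['u', 'y'], 's': ['v'], 't': ['x', 'y'],
       'u': ['y'], 'v': ['w'], 'w': ['s'], 'x': ['z'], 'y': ['q'], 'z': ['x']}
d, f = dfs_timestamps(adj)
print(d['q'], f['q'], d['r'], f['r'])  # 1 16 17 20, matching the table
```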
When reading the results, note that discovery/finish intervals nest: every descendant of a vertex is discovered after it and finishes before it.
White vertex reached: tree edge
Gray vertex reached: back edge
Black vertex reached: forward / cross edge
22.3-7
"Depth-first search is an algorithm for traversing or searching a tree, tree structure, or graph. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking. Given a connected, undirected graph, a spanning tree of that graph is a subgraph which is a tree and connects all the vertices together. A single graph can have many different spanning trees. We can also assign a weight to each edge, which is a number representing how unfavorable it is, and use this to assign a weight to a spanning tree by computing the sum of the weights of the edges in that spanning tree. A minimum spanning tree (MST) or minimum weight spanning tree is then a spanning tree with weight less than or equal to the weight of every other spanning tree. More generally, any undirected graph (not necessarily connected) has a minimum spanning forest, which is a union of minimum spanning trees for its connected components."
     d    f
w    1    6
u    2    3
v    4    5
Using depth-first search on this example, there is a path from u to v in G and d[u] < d[v] in the depth-first search, yet v is not a descendant of u in the depth-first forest. This gives the required counterexample.
23.1-1
Theorem 23.1 shows this. Let A be the empty set and S be any set containing u but not v.
Let G = (V, E) be a connected graph with a real-valued weight function w defined on E. Let A be a subset of E that is included in some minimum spanning tree of G. At any point during execution of the algorithm, the graph G_A = (V, A) is a forest, and each connected component of G_A is a tree.
A safe edge (u, v) of minimum weight connecting two of these trees (one containing u, the other v) can be added to A, and A remains a subset of a minimum spanning tree of G.
Question 1
Kruskal's minimum spanning tree algorithm starts with the empty graph and then selects edges from E according to its main rule: repeatedly add the next-lightest edge that doesn't produce a cycle. It constructs the tree edge by edge, always choosing the cheapest edge available at that moment while avoiding cycles. This makes it a greedy algorithm: it doesn't take the full picture into account, only a snapshot of the present, and every decision it makes is the one with the most obvious and immediate advantage. We start with the empty graph and add edges in increasing order of weight.
E.g.
The first two edges succeed, but B–D would produce a cycle if included, so we ignore it and move on. The final tree costs 14, the minimum. The Kruskal method follows from a cut property that in general justifies the whole family of minimum spanning tree algorithms.
A tree is an undirected graph that is connected and acyclic; this structure is what makes trees so useful, e.g. a tree with n nodes has n − 1 edges.
Any connected graph G = (V, E) with |E| = |V| − 1 is a tree. If a cycle is present, we can remove one of its edges and the graph stays connected; repeating this terminates with a graph G' = (V, E') that is acyclic and still connected. In a tree, any two nodes have exactly one path between them: given two distinct paths between the same pair, their union would contain a cycle. There is a good example below.
The addition of edge e (dotted line) to T (solid line) produces a cycle. This cycle must have at least one other edge across the cut (S, V − S). This is the same as the first example, where B–D, the third edge considered, would produce a cycle and had to be skipped by the Kruskal method.
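Kruskal's rule can be sketched with a union-find structure to detect cycles (the small example graph and its weights are mine):

```python
def kruskal(n, edges):
    # edges: list of (weight, u, v); vertices are 0..n-1.
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # next-lightest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                  # endpoints in different trees: no cycle
            parent[ru] = rv
            mst.append((u, v))
            total += w
    return mst, total

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
mst, total = kruskal(4, edges)
print(total)  # 7: the edge (0, 2) would close a cycle and is skipped
```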
CSET 3150: Homework 4
Andrew Lechlak
CSET 3150 – 002
10.1-1
Original
Push(S,4)
4 
Push(S,1)
4  1 
Push(S,3)
4  1  3 
Pop(S)
4  1 
Push(S,8)
4  1  8 
Pop(S)
4  1 
10.1-2
Two stacks can be implemented in a single array without overflow (until the stacks together hold all n elements) if they grow toward the middle from opposite ends. Each operation is O(1).
Stacks = S1 and S2
Array = A[1…n]
S1 lives at the front of A, growing upward from index 1
S2 lives at the back of A, growing downward from index n
This way they grow inward, one from each end.
Each stack's top pointer moves as elements are pushed onto that stack.
Before pushing onto either stack, check whether the two tops have become adjacent: if top(S1) + 1 == top(S2), the array is full and all n positions are occupied.
No element can be pushed then; otherwise the push proceeds in O(1).
Operation O(1)
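A sketch of this scheme in Python (class and method names are mine):

```python
class TwoStacks:
    # Two stacks sharing one array of size n; they grow toward each other.
    def __init__(self, n):
        self.a = [None] * n
        self.top1 = -1        # stack 1 grows up from index 0
        self.top2 = n         # stack 2 grows down from index n-1

    def push1(self, x):
        if self.top1 + 1 == self.top2:
            raise OverflowError("array full")   # tops are adjacent
        self.top1 += 1
        self.a[self.top1] = x

    def push2(self, x):
        if self.top1 + 1 == self.top2:
            raise OverflowError("array full")
        self.top2 -= 1
        self.a[self.top2] = x

    def pop1(self):
        if self.top1 == -1:
            raise IndexError("stack 1 empty")
        x = self.a[self.top1]
        self.top1 -= 1
        return x

    def pop2(self):
        if self.top2 == len(self.a):
            raise IndexError("stack 2 empty")
        x = self.a[self.top2]
        self.top2 += 1
        return x

s = TwoStacks(4)
s.push1(1); s.push2(9); s.push1(2); s.push2(8)
print(s.pop1(), s.pop2())  # 2 8
```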
10.1-4
Enqueue and Dequeue must detect underflow and overflow. Each operation takes O(1) time.
Enqueue (Q,x)
if (head(Q) == tail(Q) + 1)
then return overflow error
else
Q[tail(Q)] = x
if (tail(Q) == length(Q))
then tail(Q) = 1
else tail(Q) = tail(Q) + 1
Dequeue (Q)
if (head(Q) == tail(Q) == 1)
then return underflow error
else
x = Q[head(Q)]
if (head(Q) == length(Q))
then head(Q) = 1
else head(Q = head(Q) + 1
return x
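A minimal Python sketch of the same circular-buffer idea, using 0-based indices and `%` for the wraparound (the class name is illustrative). As in the pseudocode, one slot stays unused so that full and empty states are distinguishable.

```python
class CircularQueue:
    """Fixed-capacity FIFO queue in an array. head == tail means empty;
    one slot is left unused so that full (head == tail + 1, wrapping)
    is distinguishable from empty. Both operations run in O(1)."""

    def __init__(self, n):
        self.q = [None] * n
        self.head = 0
        self.tail = 0

    def enqueue(self, x):
        if self.head == (self.tail + 1) % len(self.q):
            raise OverflowError("queue overflow")
        self.q[self.tail] = x
        self.tail = (self.tail + 1) % len(self.q)

    def dequeue(self):
        if self.head == self.tail:
            raise IndexError("queue underflow")
        x = self.q[self.head]
        self.head = (self.head + 1) % len(self.q)
        return x
```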
10.2-7
A Θ(n)-time nonrecursive procedure that reverses a singly linked list of n elements, using no more than constant storage beyond the list itself:
LIST-REVERSE(L)
    p ← head[L]
    if p = NIL
        then return error "empty list"
    q ← next[p]
    r ← NIL
    while q ≠ NIL
        do next[p] ← r
           r ← p
           p ← q
           q ← next[p]
    next[p] ← r
    head[L] ← p
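The same three-pointer walk in runnable Python (a sketch; unlike the pseudocode it simply returns NIL/None for an empty list rather than signalling an error):

```python
class Node:
    def __init__(self, key, nxt=None):
        self.key = key
        self.next = nxt

def list_reverse(head):
    """Reverse a singly linked list in Theta(n) time with O(1) extra storage,
    mirroring the r/p/q pointer walk of the pseudocode above."""
    r = None              # already-reversed prefix
    p = head              # current node
    while p is not None:
        q = p.next        # save the rest of the list
        p.next = r        # point the current node backward
        r = p
        p = q
    return r              # new head of the reversed list
```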
10.4-1
11.2-2
Demonstrate the insertion of the keys 5, 28, 19, 15, 20, 33, 12, 17, 10 into a hash table with collisions resolved by chaining. Let the table have 9 slots, and let the hash function be h(k) = k mod 9.
28 mod 9 = 1, 19 mod 9 = 1, 10 mod 9 = 1
20 mod 9 = 2
12 mod 9 = 3
5 mod 9 = 5
15 mod 9 = 6, 33 mod 9 = 6
17 mod 9 = 8
Final chains (each new key is inserted at the head of its chain):
slot 1: 10 → 19 → 28
slot 2: 20
slot 3: 12
slot 5: 5
slot 6: 33 → 15
slot 8: 17
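The slot assignments above can be checked with a small Python sketch (function name illustrative). New keys are prepended, as in CHAINED-HASH-INSERT, so each chain lists its keys in reverse insertion order.

```python
def chain_insert(keys, m=9):
    """Hash each key with h(k) = k mod m and resolve collisions by chaining;
    each new key is inserted at the head of its slot's chain."""
    table = [[] for _ in range(m)]
    for k in keys:
        table[k % m].insert(0, k)   # prepend, like CHAINED-HASH-INSERT
    return table
```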
11.3-4
A = (√5 − 1)/2 = 0.6180339887...
m = 1000, so h(k) = ⌊1000(kA mod 1)⌋
*note: mod 1 keeps only the fractional part*
h(61) = 700:
1000(61 × 0.6180339887 mod 1)
= 1000(37.7000733 mod 1)
= 1000(0.7000733)
= 700.0733, and ⌊700.0733⌋ = 700
h(62) = 318:
1000(62 × 0.6180339887 mod 1)
= 1000(38.3181073 mod 1)
= 1000(0.3181073)
= 318.1073
h(63) = 936:
1000(63 × 0.6180339887 mod 1)
= 1000(38.9361413 mod 1)
= 1000(0.9361413)
= 936.1413
h(64) = 554:
1000(64 × 0.6180339887 mod 1)
= 1000(39.5541753 mod 1)
= 1000(0.5541753)
= 554.1753
h(65) = 172:
1000(65 × 0.6180339887 mod 1)
= 1000(40.1722093 mod 1)
= 1000(0.1722093)
= 172.2093
*note* to compare more values, you could compute the standard deviation of the hashed locations to measure how evenly the function spreads the keys.
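The computations above can be reproduced with a short Python function (name illustrative) implementing the multiplication-method hash h(k) = ⌊m(kA mod 1)⌋ with A = (√5 − 1)/2:

```python
import math

def mult_hash(k, m=1000):
    """Multiplication-method hash: h(k) = floor(m * (k*A mod 1)),
    with A = (sqrt(5) - 1) / 2, the constant used in the worked values above."""
    A = (math.sqrt(5) - 1) / 2
    return math.floor(m * ((k * A) % 1))
```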
12.2-1
Sequences (c) and (e) cannot occur as search sequences.
(c) 912 cannot be encountered after the left branch is taken at 911: since 240 comes before 911 and is to its left, every later node on the search path must be less than 911, but 912 > 911.
(e) 299 cannot be encountered after the right branch is taken at 347: since 621 is to the right of 347, every later node on the search path must be greater than 347, but 299 < 347.
Let x be a node with two children.
In an inorder tree walk, the nodes of x's left subtree precede x and the nodes of x's right subtree follow x. Hence x's predecessor lies in its left subtree and its successor lies in its right subtree.
Let y be x's successor.
y cannot have a left child: such a child would lie in x's right subtree (so it comes after x in the walk) and in y's left subtree (so it comes before y), placing it strictly between x and y in the inorder walk and contradicting that y is x's successor.
By the symmetric argument, x's predecessor has no right child.
13.1-3
If we color the root of a relaxed red-black tree black but make no other changes, the resulting tree is a red-black tree: property 2 now holds, no other property is affected, and no node's black-height changes.
CSET 3150: Homework 3
CSET 3150 Fall 2009
HW3 Solution
1. Exercise 8.2-4 (page 170)
2. Exercise 8.3-4 (page 173)
3. Exercise 9.2-4 (page 189)
4. Exercise 9.3-3 (page 192)
Note: Questions 1-4 are 10 points each.
1. Exercise 8.2-4 (page 170)
Compute the C array as is done in counting sort. The number of integers in the range [a..b]
is then C[b] − C[a−1], where we interpret C[−1] as 0.
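A Python sketch of this preprocessing and query (function names illustrative). The C[−1] = 0 convention is handled explicitly, since a literal index of -1 would wrap around in Python.

```python
def preprocess(a, k):
    """Build counting sort's cumulative array C in O(n + k) time:
    C[i] = number of elements of a that are <= i, for 0 <= i <= k."""
    c = [0] * (k + 1)
    for x in a:
        c[x] += 1
    for i in range(1, k + 1):
        c[i] += c[i - 1]
    return c

def range_count(c, lo, hi):
    """Answer 'how many elements fall in [lo..hi]?' in O(1) time,
    interpreting C[-1] as 0."""
    below = c[lo - 1] if lo >= 1 else 0
    return c[hi] - below
```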
2. Exercise 8.3-4 (page 173)
Treat the numbers as 2-digit numbers in radix n. Each digit ranges from 0 to n−1. Sort
these 2-digit numbers with radix sort.
There are 2 calls to counting sort, each taking Θ(n + n) = Θ(n) time, so the total time
is Θ(n).
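A runnable sketch of this idea (the function name is illustrative): each key k in 0..n²−1 is treated as the two base-n digits k mod n and k // n, and a stable counting sort is applied to each digit, least significant first.

```python
def sort_small_ints(a):
    """Sort n integers in the range 0..n^2-1 in O(n) time: view each as a
    2-digit radix-n number and radix-sort with a stable counting sort
    on each digit, low digit first."""
    n = len(a)
    if n == 0:
        return a
    for digit in (0, 1):
        key = (lambda k: k % n) if digit == 0 else (lambda k: k // n)
        count = [0] * n
        for x in a:
            count[key(x)] += 1
        for i in range(1, n):                 # prefix sums, as in counting sort
            count[i] += count[i - 1]
        out = [0] * n
        for x in reversed(a):                 # reversed scan keeps the sort stable
            count[key(x)] -= 1
            out[count[key(x)]] = x
        a = out
    return a
```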
3. Exercise 9.2-4 (page 189)
We get the worst-case running time O(n^2) by always picking the largest remaining number as
the pivot for PARTITION. In particular, the first pivot is 9, the second is 8, and so on.
4. Exercise 9.3-3 (page 192)
A modification to quicksort that allows it to run in O(n lg n) time in the worst case uses
the deterministic SELECT algorithm together with the version of PARTITION modified to take
the element to partition around as an input parameter.
BEST-CASE-QUICKSORT(A, p, r)
    if p < r
        then i ← ⌊(r − p + 1)/2⌋
             x ← SELECT(A, p, r, i)
             q ← PARTITION(A, p, r, x)
             BEST-CASE-QUICKSORT(A, p, q − 1)
             BEST-CASE-QUICKSORT(A, q + 1, r)
Because BEST-CASE-QUICKSORT always recurses on subarrays that are at most half
the size of the original array, the recurrence for the worst-case running time is
T(n) ≤ 2T(n/2) + Θ(n) = O(n lg n).
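This idea can be sketched in Python, with SELECT implemented by the median-of-medians rule and the quicksort partitioning around the median SELECT returns. This is a copying sketch under illustrative names, not the in-place CLRS procedures: it keeps the O(n lg n) worst-case running time but not the in-place space behavior.

```python
def select(a, i):
    """Deterministic (median-of-medians) selection: return the i-th smallest
    element of a (0-based). Linear time, but copies sublists for clarity."""
    if len(a) <= 5:
        return sorted(a)[i]
    # median of each group of 5, then the median of those medians as pivot
    medians = [sorted(a[j:j + 5])[len(a[j:j + 5]) // 2]
               for j in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    if i < len(lo):
        return select(lo, i)
    if i < len(lo) + len(eq):
        return pivot
    return select([x for x in a if x > pivot], i - len(lo) - len(eq))

def best_case_quicksort(a):
    """Quicksort that always partitions around the true median found by
    SELECT, giving an O(n lg n) worst case at the cost of a larger constant."""
    if len(a) <= 1:
        return a
    pivot = select(a, len(a) // 2)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    return best_case_quicksort(lo) + eq + best_case_quicksort(hi)
```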
CSET 3150: Homework 2
 10.3-1
 10.3-2
Each list element is an object that occupies a contiguous subarray of length 2 within the array. Its two fields, key and next, correspond to offsets 0 and 1 respectively, and a pointer to an object is the index of the object's first element. The free objects are kept in the same array, threaded into a free list through their next fields; the head of the free list is held in the global variable free.
ALLOCATE-OBJECT()
{
    if (free = NIL)
        then error "out of space"
    else
        x = free
        free = A[free + 1]    // next field of the old free-list head
        return x
}
DELETE-OBJECT(x)
{
    A[x + 1] = free           // next field of x now heads the old free list
    free = x
}
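A runnable sketch of this array-backed allocator (class and method names illustrative): object x keeps its key in A[x] and its next pointer in A[x + 1], and the free objects are threaded through those same next fields.

```python
NIL = -1

class ObjectPool:
    """Linked-list objects stored in one array: the object at index x keeps
    its key in A[x] and its next pointer in A[x + 1]. Free objects are
    threaded through the same next fields, headed by `free`."""

    def __init__(self, capacity):
        self.A = [NIL] * (2 * capacity)
        # chain every object onto the free list: 0 -> 2 -> 4 -> ... -> NIL
        for x in range(0, 2 * capacity - 2, 2):
            self.A[x + 1] = x + 2
        self.free = 0 if capacity > 0 else NIL

    def allocate(self):
        if self.free == NIL:
            raise MemoryError("out of space")
        x = self.free
        self.free = self.A[x + 1]   # free = next[free]
        return x

    def delete(self, x):
        self.A[x + 1] = self.free   # next[x] = old free head
        self.free = x
```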
 10.4-1
 10.4-4
PRINT-TREE(root)
{
    if (root ≠ NIL)
    {
        print(root.key)
        PRINT-TREE(root.left-child)
        PRINT-TREE(root.right-sibling)
    }
}
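The same O(n) traversal in runnable Python, collecting keys into a list instead of printing them (names illustrative):

```python
class TreeNode:
    """Node of a rooted tree in left-child, right-sibling representation."""
    def __init__(self, key, left_child=None, right_sibling=None):
        self.key = key
        self.left_child = left_child
        self.right_sibling = right_sibling

def collect_keys(root, out):
    """Visit every node exactly once, O(n) total: record the key, then
    recurse into the first child, then into the next sibling."""
    if root is not None:
        out.append(root.key)
        collect_keys(root.left_child, out)
        collect_keys(root.right_sibling, out)
```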
 10-1
Comparison among lists (entries are worst-case asymptotic times):

                      Unsorted        Sorted          Unsorted        Sorted
                      singly linked   singly linked   doubly linked   doubly linked
SEARCH(L, k)          Θ(n)            Θ(n)            Θ(n)            Θ(n)
INSERT(L, x)          Θ(1)            Θ(n)            Θ(1)            Θ(n)
DELETE(L, x)          Θ(n)            Θ(n)            Θ(1)            Θ(1)
SUCCESSOR(L, x)       Θ(n)            Θ(1)            Θ(n)            Θ(1)
PREDECESSOR(L, x)     Θ(n)            Θ(n)            Θ(n)            Θ(1)
MINIMUM(L)            Θ(n)            Θ(1)            Θ(n)            Θ(1)
MAXIMUM(L)            Θ(n)            Θ(n)            Θ(n)            Θ(1)
 12.2-1
c and e
(c) because 912 cannot be encountered after the left branch is taken at 911
(e) because 299 cannot be encountered after the right branch is taken at 347
 12.3-4
TREE-DELETE handles the deletion of a node z with two children by redirecting the pointers of p[z], left[z], and right[z] to point to z's successor. This replaces the copying of the data from the successor.
TREE-DELETE(T, z)
{
    if left[z] = NIL or right[z] = NIL
        then y ← z
        else y ← TREE-SUCCESSOR(z)
    if left[y] ≠ NIL
        then x ← left[y]
        else x ← right[y]
    if x ≠ NIL
        then p[x] ← p[y]
    if p[y] = NIL
        then root[T] ← x
        else if y = left[p[y]]
            then left[p[y]] ← x
            else right[p[y]] ← x
    if y ≠ z
        then left[y] ← left[z]
             right[y] ← right[z]
             p[left[z]] ← y
             p[right[z]] ← y
             p[y] ← p[z]
             if p[z] = NIL
                 then root[T] ← y
                 else if z = left[p[z]]
                     then left[p[z]] ← y
                     else right[p[z]] ← y
}
 12.3-5
False. Deletion is not commutative; a counterexample is shown below.
Deleting 1 then 2:
Deleting 2 then 1:

 13.1-1
Tree with blackheight 2
Tree with blackheight 3
Tree with blackheight 4
 13.1-5
Show that the longest simple path from a node x in a red-black tree to a descendant leaf has length at most twice that of the shortest path from node x to a descendant leaf.
The shortest simple path from x down to a descendant leaf has length at least bh(x), the black-height of x, since every such path contains bh(x) black nodes. The longest simple path consists of those bh(x) black nodes plus as many red nodes as possible. By property 4, each red node must have black children, so no two consecutive nodes on a path can be red and a path contains at most bh(x) red nodes. Hence the longest path has length at most 2 · bh(x), twice the shortest simple path.
 13.2-3
Let a, b, and c be arbitrary nodes in subtrees α, β, and γ, respectively, in the left tree of Figure 13.2. How do the depths of a, b, and c change when a left rotation is performed on node x in the figure?
The depth of a increases by 1.
The depth of b remains the same.
The depth of c decreases by 1.
 13.3-2
Show the red-black trees that result after successively inserting the keys 41, 38, 31, 12, 19, and 8 into an initially empty red-black tree.
Insert 41 Insert 38 Insert 31
Insert 12
Insert 19
Insert 8
 Show the red-black trees that result after successively inserting the keys 5, 10, 15, 25, 20, and 30 into an initially empty red-black tree.
Insert 5 Insert 10 Insert 15
Insert 20
Insert 25 Insert 30
 13.4-3. In Exercise 13.3-2 (problem 12), you found the red-black tree that results from successively inserting the keys 41, 38, 31, 12, 19, and 8 into an initially empty tree. Now show the red-black trees that result from the successive deletion of the keys in the order 8, 12, 19, 31, 38, and 41.
Delete 8 Delete 12
Delete 19 Delete 31
Delete 38 Delete 41 – Empty Tree
 Show the red-black tree that results from successively deleting the keys 30, 25, 20, 15, 10, and 5 from the final tree in problem 13.
Delete 30 Delete 25
Delete 20 Delete 15
Delete 10 Delete 5: Empty Tree
 13.4-7
Suppose that a node x is inserted into a red-black tree with RB-INSERT and then immediately deleted with RB-DELETE. Is the resulting red-black tree the same as the initial red-black tree? Justify your answer.
Not necessarily. The rotations and recolorings performed by the insertion fixup are not undone by the deletion, so the resulting tree may differ from the original.
Let us insert and delete node 15 into the following tree:
 11.2-2
Demonstrate the insertion of the keys 5, 28, 19, 15, 20, 33, 12, 17, and 10 into a hash table with collisions resolved by chaining. Let the table have 9 slots, and let the hash function be h(k) = k mod 9.
 11.3-1
Suppose we wish to search a linked list of length n, where each element contains a key k along with a hash value h(k). Each key is a long character string. How might we take advantage of the hash values when searching the list for an element with a given key?
First compute the hash value for the given key. For each list element, perform the string comparison only after verifying that the hash value for the given key is the same as the one stored in the list element.
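This strategy can be sketched in Python, storing each element as a (hash, key) pair; the function name is illustrative, and Python's built-in `hash()` stands in for whatever h(k) the table actually uses. The cheap integer comparison filters out almost all elements before any expensive string comparison is done.

```python
def list_search(items, key):
    """Search a list of (stored_hash, string_key) pairs for `key`:
    compare the cheap stored hash first, and perform the expensive
    string comparison only when the hashes match."""
    hk = hash(key)
    for h, k in items:
        if h == hk and k == key:    # string compare only after a hash match
            return k
    return None
```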
 11.4-1
Consider inserting the keys 10, 22, 31, 4, 15, 28, 17, 88, and 59 into a hash table of length m = 11 using open addressing with the primary hash function h'(k) = k mod m. Illustrate the result of inserting these keys using linear probing, using quadratic probing with c_1 = 1 and c_2 = 3, and using double hashing with h_2(k) = 1 + (k mod (m − 1)).
Linear probing Quadratic Probing Double Hashing
 11.4-4
Consider an open-address hash table with uniform hashing. Give upper bounds on the expected number of probes in an unsuccessful search and on the expected number of probes in a successful search when the load factor is α = 3/4 and when it is α = 7/8.
Theorem 11.6. Given an open-address hash table with load factor α = n/m < 1, the expected number of probes in an unsuccessful search is at most 1/(1 − α), assuming uniform hashing.
α = 3/4: the upper bound on the number of probes is 1/(1 − 3/4) = 4 probes.
α = 7/8: the upper bound on the number of probes is 1/(1 − 7/8) = 8 probes.
Theorem 11.8. Given an open-address hash table with load factor α < 1, the expected number of probes in a successful search is at most (1/α) ln(1/(1 − α)), assuming uniform hashing and assuming that each key in the table is equally likely to be searched for.
α = 3/4: (4/3) ln 4 ≈ 1.85 probes.
α = 7/8: (8/7) ln 8 ≈ 2.38 probes.
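Assuming the load factors are α = 3/4 and α = 7/8 (the values that reproduce the probe counts above), the two bounds can be checked numerically (function names illustrative):

```python
import math

def unsuccessful_bound(alpha):
    """Theorem 11.6 upper bound on expected probes: 1 / (1 - alpha)."""
    return 1 / (1 - alpha)

def successful_bound(alpha):
    """Theorem 11.8 upper bound: (1/alpha) * ln(1 / (1 - alpha))."""
    return (1 / alpha) * math.log(1 / (1 - alpha))
```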
CSET 3150: Homework 1
CSET 3150 Fall 2009
HW1 Solution
1. Show that
a. 5n^2 = O(n^2)
b. n^2 + 10n = O(n^2)
c. n = O(n^2)
d. n^8 = Ω(n^6)
e. n ≠ Θ(5n^2)
f. 6n^2 + n ≠ Ω(n^3)
g. a^n = O(b^n), b > a > 1
h. lg n = O(n)
2. Exercise 1.2-3 (page 13)
3. Exercise 2.1-3 (page 21)
4. Exercise 2.2-3 (page 27)
5. Exercise 3.1-1 (page 50)
6. Exercise 3.1-4 (page 50)
Note: 16 points for Question 1; Questions 2-6 are 10 points each.
1.
Hints: follow the definitions of the (O, Θ, Ω) notations. For most of the cases, you
just need to find constants c, n0 > 0 such that a certain inequality holds for all n ≥ n0.
Refer to the solution to 6 (Exercise 3.1-4) for more examples.
a. 5n^2 = O(n^2)
To show this, we must find constants c, n0 > 0 such that
5n^2 ≤ cn^2 for all n ≥ n0.
c = 6, n0 = 1 satisfy the definition.
b. n^2 + 10n = O(n^2)
To show this, we must find constants c, n0 > 0 such that
n^2 + 10n ≤ cn^2 for all n ≥ n0.
c = 11, n0 = 1 satisfy the definition.
ANTH 2800 – Reading Response 12
“Applied anthropology refers to the application of method and theory in anthropology to the analysis and solution of practical problems. In as much as anthropology traditionally entails four subdisciplines–biological (a.k.a. physical), cultural, linguistic, and archaeological anthropology–the practical application of any of these subdisciplines may properly be designated “applied anthropology”.
We have studied the cultural and biological in great detail. We have learned how the ethnographic fieldwork studies these subdisciplines.
In the readings, the article on the war in Iraq was personally interesting to me. I liked knowing that there were people, specifically anthropologists, over in Iraq serving alongside and aiding our soldiers. I have friends and family over there, and knowing that someone there has the ability to know, learn, and relate with the local people is comforting. It would be easy to insult or accidentally offend one of them by doing something we normally do, without knowing that it was a sensitive subject to them. The anthropologists were also deployed to stop unnecessary fighting, which could have arisen in exactly that kind of situation.
It is also said that anthropologists have been recruited for evil or less than peaceful intentions. They are used to gather intelligence and to use the local people to gain access to positions and better options for killing. Human terrain mapping has also been called an enabler of chain killing. I have mixed feelings on this. I believe they should focus on safety first, but the article made it read as if this was more about genocide than safety regulations.
Prisoners have been forcibly studied, and native people have been strongly encouraged, with guns and in a threatening manner, to submit to being studied. It is said that these same anthropologists could also have helped prevent the war in the first place. In my opinion this is a stretch. To prevent wars, such work would have needed to start long ago, with conflicts that were not yet thought of as part of the war but that really increased hostilities. There is a good chance conflicts could have been avoided, but the effort would have needed to begin much earlier.
The military also uses these people to prepare defenses and to gather counterintelligence. By having them study other cultures and do their fieldwork there, the military can prepare the US against terrorism, strengthen its defense, and keep itself informed of occurrences in other countries. They do this inconspicuously, so that ordinary people, or even "important" people, can be fooled by those who have integrated into their society.
ANTH 2800 – Reading Response 11
“Globalisation (or globalization) describes a process by which regional economies, societies, and cultures have become integrated through a globe-spanning network of communication and trade. The term is sometimes used to refer specifically to economic globalization: the integration of national economies into the international economy through trade, foreign direct investment, capital flows, migration, and the spread of technology.[1] However, globalization is usually recognized as being driven by a combination of economic, technological, sociocultural, political, and biological factors.[2] The term can also refer to the transnational circulation of ideas, languages, or popular culture through acculturation.”
It was interesting to see the comparison of globalization between North and South. That people feel the North is better off, wealthier and more prosperous overall, because it is where industrialization occurred seems reasonable. What works there is that everyone works together: investors, buyers, sellers, and consumers. If they could not find harmony, it would turn out poorly for everyone. The South, unfortunately, is not where industrialization occurred. It did not have the means to build factories to produce what the North was producing; it had the concepts and designs, but not the money to support them.
This seems very similar to the North and South of the United States at least during the Civil war era. The North was a highly industrialized region, while the south was more agricultural and cash crop heavy. The problem is, if the cash crops are not bought, there is no cash. The technology in the South came from the North. The South was dependent on the North for it. So when the South wanted to separate from the North and form its Confederacy, it could not compete with the North and its technology. They did not have the finances to compete.
It was also interesting to see how the South is reached by media, movies, and advertisement, yet is not ready, financially or culturally, for the action: the ability to transform its economy into a global market. It may receive all of the information and important updates it needs, but it can still lack the resources or the initial economic investment to get off the ground. It did not have all of the investment options we have today.