Loop Invariant: At the start of the for loop, sum contains the sum of the values in the subarray A[1:i-1].
Initialization: i=1, and sum is 0, so it contains the sum of the empty array A[1:0].
Maintenance: sum increases by A[i]. So if it held the sum of A[1:i-1], it now holds the sum of all values in the subarray A[1:i]. Incrementing i for the next iteration of the for loop then preserves the invariant.
Termination: The loop exits when i is equal to n+1. Substituting this for i in the loop invariant, we get that sum holds the sum of the values in the subarray A[1:n+1-1] = A[1:n].
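As a concrete sketch, here is the loop the invariant describes, written in Python (0-indexed, so the book's A[1:i-1] corresponds to A[0:i] here; the function name is my own):

```python
def array_sum(A):
    """Sum the values in A with the loop whose invariant is argued above."""
    total = 0  # the sum of the empty prefix
    for i in range(len(A)):
        # Invariant: total holds the sum of A[0:i].
        total += A[i]
    return total
```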
Descending sort is here. The comparison operator is just changed to <=.
Find(A, x)
    index = nil
    for i = 1 to A.length
        if A[i] == x
            index = i
            break
    return index
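The same routine in Python, with None standing in for nil (a 0-indexed sketch):

```python
def find(A, x):
    """Linear search: return the first index holding x, or None."""
    index = None
    for i in range(len(A)):
        if A[i] == x:
            index = i
            break
    return index
```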
Loop Invariant: At the start of the for loop, A[1:i-1] does not contain the value x.
Initialization: At the start of the for loop, A[1:0] is empty, so it does not contain x.
Maintenance: Suppose i = 3 in an array A of size 5. Suppose A[1:i-1] = A[1:2] does not contain x. The loop executes, and it does not exit early on the break statement. So A[3] did not equal x. Now, i is 4, prior to the start of the next iteration of the for loop. So A[1:4-1], or A[1:3], the combination of A[1:2] and A[3], does not contain x. Incrementing i for the next iteration of the for loop preserves the loop invariant.
Termination: Suppose the for loop exited because A[i] was equal to x. Then we will have captured i in index and returned it, such that A[index] is equal to x. Suppose instead the loop terminated because i was equal to A.length + 1. Using the loop invariant, we have that A[1:A.length+1-1] does not contain the value x, or A[1:A.length] does not contain the value x. index will hold the value nil, and we'll return it.
Here is the code for adding two n-bit binary integers.
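Since that code isn't reproduced above, here is a hedged Python sketch of the idea; the representation (bit lists of equal length, least significant bit first) and the function name are assumptions of mine, not the text's:

```python
def add_binary(a, b):
    """Add two n-bit binary integers given as equal-length bit lists,
    least significant bit first; return an (n+1)-bit list.
    (Bit order and list representation are assumptions of this sketch.)"""
    n = len(a)
    c = [0] * (n + 1)
    carry = 0
    for i in range(n):
        total = a[i] + b[i] + carry  # 0, 1, 2, or 3
        c[i] = total % 2             # result bit
        carry = total // 2           # carry into the next position
    c[n] = carry
    return c
```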
SelectionSort(A, n)
    for i = 1 to n-1
        min = nil
        minJ = nil
        for j = i to n
            if min == nil || A[j] < min
                min = A[j]
                minJ = j
        temp = A[i]
        A[i] = min
        A[minJ] = temp
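A Python sketch of the pseudocode above; the nil sentinel is replaced by starting the minimum at index i, which behaves identically:

```python
def selection_sort(A):
    """In-place selection sort: repeatedly swap the smallest remaining
    value into position i."""
    n = len(A)
    for i in range(n - 1):
        min_j = i  # index of the smallest value seen so far in A[i:]
        for j in range(i + 1, n):
            if A[j] < A[min_j]:
                min_j = j
        A[i], A[min_j] = A[min_j], A[i]
    return A
```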
Loop invariant: At the start of the outer loop, A[1:i-1] contains the i-1 smallest values of A, and they are sorted.
Initialization: When i=1, A[1:0] is the empty array, so it holds the smallest 0 values of A, and it is vacuously sorted.
Maintenance: Suppose n=5, and i=3. At the start of the outer loop, A[1:2] contains the 2 smallest values of A, and it is sorted.
The inner loop will loop over A[3:n] and find the smallest value in it. This value is greater than or equal to all of the values in A[1:2], by virtue of our assumption. The algorithm will then swap the value it found with A[3]. Then the loop will be ready for its next iteration. At this point, i=4, and A[1:3] holds the 3 smallest values of A, and they are sorted. So incrementing i for the next iteration of the outer loop preserves the loop invariant.
Termination: When i=n, the outer loop terminates. The loop invariant gives us that A[1:i-1] = A[1:n-1] contains the smallest n-1 elements of the array, and they are sorted. A[n] is greater than or equal to these elements. So A is sorted.
Suppose A is sorted in descending order, so that the if statement body inside the inner for loop executes every time. This is the worst case. The if statement's test will execute

sum from i=1 to n-1 of (n - i + 1) = n(n+1)/2 - 1

times, which is θ(n^2). This if statement's cost dominates the running time of the algorithm. Thus the worst-case running time of SelectionSort is θ(n^2).

The best case is no better, because the inner for loop executes a very similar number of times to the if statement, regardless of the input. So the best-case running time is also θ(n^2).
The worst-case running time of the linear search is θ(n): the cost of the for loop body is θ(1), and in the worst case (x absent, or in the last position) the body executes n times.
To think about the average case, let us consider searching an array A of size 5, where the value x we seek is equally likely to be at each position.
i | How often A[i] is the element we sought | Number of for loop body executions
---|---|---
1 | 2/10 | 1
2 | 2/10 | 2
3 | 2/10 | 3
4 | 2/10 | 4
5 | 2/10 | 5
The weighted average of the number of times the for loop body ran is: (2/10)(1) + (2/10)(2) + (2/10)(3) + (2/10)(4) + (2/10)(5) = (2/10)(15) = 3.
So the body of the for loop on average executed 3 times, a constant fraction (3/5) of the maximum possible n = 5 executions, which is θ(n). We conclude that the average-case and worst-case running times of linear search are the same, θ(n).
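The weighted average can be checked mechanically (a sketch; the probabilities and costs are those in the table):

```python
# Each of the 5 positions is equally likely to hold x (probability 2/10),
# and finding x at position i costs i executions of the for loop body.
costs = [1, 2, 3, 4, 5]
expected = sum(2 * c for c in costs) / 10  # weighted average
print(expected)  # 3.0
```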
Since sorting algorithms don't run in linear time except in the best case, you can add a linear-time check at the start of any sorting algorithm to see if the input A is already sorted. This will cost θ(n), and thus not disrupt the algorithm's original running time.
returnEarly = true
for i = 2 to n
    returnEarly = returnEarly && A[i-1] <= A[i]
if returnEarly
    return
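The same early-return check as a small Python helper (the name is mine):

```python
def is_sorted(A):
    """Linear-time check: True if A is in nondecreasing order."""
    return all(A[i - 1] <= A[i] for i in range(1, len(A)))
```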
I think the problem is asking what happens if we test for p == r on line 1 instead of p >= r. I think the problem statement is written incorrectly when it says we test for p != r: that test would return immediately even when there is legitimate sorting to be done.
So, suppose MergeSort(A,p,r) is called initially with MergeSort(A,1,n), where n >= 1.

Consider a call of MergeSort(A, k, k). Since p == r, this will return immediately.

Consider a call of MergeSort(A, k, k+1). This results in q = floor((k + k + 1)/2) = k, which will result in calls to MergeSort(A,k,k) and MergeSort(A,k+1,k+1), both of which will return immediately since p == r for these.

Finally, consider the remaining possibility of MergeSort(A,k,k+j) with j > 1. Here q = floor((2k + j)/2) = k + floor(j/2), so this results in calls to MergeSort(A,k,k+floor(j/2)) and MergeSort(A,k+floor(j/2)+1,k+j). Both are again of the form MergeSort(A,k',k'+j') with 0 <= j' < j, so the cases above (and induction on j) apply to them.

All of this is to say that starting from MergeSort(A,k,k+j) with j >= 0, no call to MergeSort(A,p,r) will ever be made with p > r, and every call terminates. MergeSort(A,1,n) with n >= 1 is covered by the above cases.
This solution would also need to consider the case n = 0: a call MergeSort(A, 1, 0) has p > r, which the p == r test does not catch.
Loop Invariant: At the start of the while loop, A[p:k-1] is a sorted subarray, containing the sorted elements from subarrays L[0:i-1] and R[0:j-1].
Initialization: i=0 and j=0, k=p. So the empty subarray A[p:p-1] contains the sorted elements from the (also) empty arrays L[0:-1] and R[0:-1].
Maintenance: Suppose p=5, k=8, i=1, j=2, n_L=4, and n_R=4. This means A[p:k-1]=A[5:7] contains the sorted elements from L[0:0] and R[0:1].
Suppose L[1] < R[2]. Then A[8] will be assigned L[1], and i and k will be incremented to 2 and 9, respectively. Then A[5:8] will contain A[5:7] combined with A[8]=L[1], and it is sorted. It contains all elements from L[0:1] and R[0:1]. So incrementing k for the next iteration of the while loop preserves the invariant, in this case.
Suppose L[1] >= R[2]. Then A[8] will be assigned R[2], and j and k will be incremented to 3 and 9, respectively. Then A[5:8] will contain A[5:7] combined with A[8]=R[2], and it will be sorted. It contains all elements from L[0:0] and R[0:2]. So incrementing k for the next iteration of the while loop preserves the loop invariant in this case.
So regardless of whether L[1] < R[2] or not, incrementing k for the next iteration of the while loop preserves the loop invariant.
Termination: Either i = n_L or j = n_R when the while loop exits. If the former, the loop invariant gives us that A[p:k-1] contains the sorted elements of L[0:n_L-1] and R[0:j-1]. Lines 20-23 will not execute, but 24-27 will. These append the remaining (n_R-j) elements of R to A[p:k-1] (for p=5 and k=11, A[5:10]); then (for n_R-j=4-2=2) k will equal 13, and A[p:k-1]=A[5:13-1]=A[5:12] will contain the sorted elements from L and R. If the latter, the loop invariant gives us that A[p:k-1] contains the sorted elements of L[0:i-1] and R[0:n_R-1]. Lines 24-27 will not execute, but 20-23 will. These append the remaining (n_L-i) elements of L to A[p:k-1] (for p=5 and k=10, A[5:9]); then (for n_L-i=4-1=3) k will equal 13, and A[p:k-1]=A[5:12] will contain the sorted elements from L and R. So regardless of which of the two conditions caused the while loop of lines 12-18 to exit, A[5:12] will contain the sorted elements of L and R.
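A runnable Python sketch of the merge step the invariant describes (0-indexed; the line numbers in the text refer to the book's pseudocode, not to this sketch):

```python
def merge(A, p, q, r):
    """Merge the sorted subarrays A[p:q+1] and A[q+1:r+1] in place."""
    L = A[p:q + 1]
    R = A[q + 1:r + 1]
    i = j = 0
    k = p
    # Invariant: A[p:k] holds, in sorted order, the elements of L[0:i] and R[0:j].
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1
    # One side is exhausted; copy the other's leftovers
    # (the copy loops discussed under Termination).
    while i < len(L):
        A[k] = L[i]
        i += 1
        k += 1
    while j < len(R):
        A[k] = R[j]
        j += 1
        k += 1
```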
We see what the base case of T(2) is equal to: T(2) = 2 = 2 lg 2, so the guess T(n) = n lg n holds for n = 2. Now we assume T(2^k) = 2^k * k for some k >= 1, and check 2^(k+1): T(2^(k+1)) = 2T(2^k) + 2^(k+1) = 2 * 2^k * k + 2^(k+1) = 2^(k+1)(k + 1) = 2^(k+1) lg 2^(k+1).
Here is the code for the recursive InsertionSort.
Its worst-case running time is θ(n^2), since it obeys the recurrence T(n) = T(n-1) + θ(n).
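A Python sketch of the recursion (the actual code isn't shown above, so this is my reconstruction: sort the first n-1 elements, then insert A[n-1]):

```python
def insertion_sort_rec(A, n):
    """Recursively sort the prefix A[0:n] in place."""
    if n <= 1:
        return  # a prefix of length 0 or 1 is already sorted
    insertion_sort_rec(A, n - 1)   # sort A[0:n-1]
    key = A[n - 1]                 # insert A[n-1] into the sorted prefix
    i = n - 2
    while i >= 0 and A[i] > key:
        A[i + 1] = A[i]
        i -= 1
    A[i + 1] = key
```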
The binary search code is here. The size of the input to find on any given call to it is r - p + 1. Inside find, we take the average of p and r (and name it q), and in the worst case call find again with either (p, q-1) or (q+1, r).

In the first case, the next size of the input will be (we drop respect for the floor and the minus 1):

q - p = (p + r)/2 - p = (r - p)/2

So the size of the input has been cut in half. If we choose (q+1, r), we get:

r - q = r - (p + r)/2 = (r - p)/2

So the input size is cut in half no matter which of the two possible routes the code takes. You can draw a recursion tree, which will have about lg n levels, each costing θ(1); so the worst-case running time of binary search is θ(lg n).
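A recursive Python sketch of find over a sorted array (0-indexed; the exact signature is an assumption):

```python
def find(A, x, p, r):
    """Binary search A[p:r+1] (sorted) for x; return an index or None."""
    if p > r:
        return None
    q = (p + r) // 2  # the average of p and r, floored
    if A[q] == x:
        return q
    if x < A[q]:
        return find(A, x, p, q - 1)  # input size roughly halved
    return find(A, x, q + 1, r)      # input size roughly halved
```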
No, the binary search would not improve things. Even though it could find where the key needs to be inserted in θ(lg n) time, insertion sort would still need to shift up to i-1 elements to make room for the key, which costs θ(n) per insertion in the worst case. So the worst-case running time would remain θ(n^2).
The code for finding the two elements that sum to x is here.
It uses MergeSort at the outset, costing θ(n lg n).
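The rest of that code isn't reproduced above, so here is one hedged way to finish the job in θ(n lg n) overall: after sorting, scan inward from both ends (the two-pointer scan is my choice; binary-searching for x - A[i] per element works equally well):

```python
def two_sum_exists(A, x):
    """Return True if some pair of elements of A sums to x.
    Sorting costs θ(n lg n); the inward scan costs θ(n)."""
    B = sorted(A)  # stands in for the MergeSort mentioned above
    lo, hi = 0, len(B) - 1
    while lo < hi:
        s = B[lo] + B[hi]
        if s == x:
            return True
        if s < x:
            lo += 1  # need a larger sum
        else:
            hi -= 1  # need a smaller sum
    return False
```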