Methods of analysis
To be able to predict the time without having to look at the details of the input, we measure it as a function of the length of the input. Here x has basically constant length (depending on what an item is) and the length of L is just the number of items in it.
So given a list L with n items, how many comparisons does the algorithm take? Answer: it depends.
We want an answer that doesn't depend on the particular input, only on its length. There are various ways of getting one, by combining the times for the different inputs of the same length.
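To make "number of comparisons" concrete, here is a minimal sketch of the sequential search being analysed, written in Python; it counts comparisons as it scans. The function name and the choice to return a 0-based position are illustrative, not taken from the notes.

def sequential_search(L, x):
    """Scan L from the front; return (position of x or None, comparisons made)."""
    comparisons = 0
    for i, item in enumerate(L):
        comparisons += 1            # one comparison of x against an item of L
        if item == x:
            return i, comparisons   # found x at position i (0-based)
    return None, comparisons        # x is not in L: all n items were compared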
Worst case analysis -- what is the largest number of comparisons we could ever see, no matter how perverse the input is?
    time_wc(n) = max_{inputs I of size n} time(I)
Best case analysis -- what is the smallest number of comparisons the algorithm could take if the input is well behaved?
    time_bc(n) = min_{inputs I of size n} time(I)
Average case analysis -- how much time would the algorithm take on "typical" input?
We assume that each input I of size n has a probability P[I] of being the actual input, and use these probabilities to form a weighted average:
    time_avg(n) = sum_{inputs I of size n} P[I] * time(I)
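As a sanity check on these three definitions, here is a sketch that applies them to sequential search. As worked out below, when x is in the list the only thing that matters about an input of size n is the position of x, and time(I) for "x at position pos" is pos comparisons; the uniform probabilities P[pos] = 1/n are the assumption used for the average case, the inputs where x is absent are left out here, and the function name is illustrative.

from fractions import Fraction

def analyse(n):
    # time(I) for the input "x at position pos" (1-based) is pos comparisons.
    times = [pos for pos in range(1, n + 1)]
    worst = max(times)                                 # time_wc(n)
    best = min(times)                                  # time_bc(n)
    avg = sum(Fraction(1, n) * t for t in times)       # time_avg(n), P[pos] = 1/n
    return worst, best, avg

print(analyse(10))   # (10, 1, Fraction(11, 2)) -- i.e. n, 1, (n+1)/2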
These distinctions didn't arise with the Fibonacci numbers, because there the time depended only on n -- every input of size n took the same time -- but here they can give different answers, as we'll see with sequential search.
Analysis of sequential search
The best case for sequential search is that it does one comparison: x matches the first item right away.
In the worst case, sequential search does n comparisons, and either matches the last item in the list or doesn't match anything.
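Using the sequential_search sketch from above, best- and worst-case inputs are easy to exhibit for a small list (the particular list and keys are just examples):

L = ['a', 'b', 'c', 'd', 'e']
print(sequential_search(L, 'a')[1])   # 1 comparison: best case, match up front
print(sequential_search(L, 'e')[1])   # 5 comparisons: match at the last position
print(sequential_search(L, 'z')[1])   # 5 comparisons: no match at all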
The average case is harder. We know that the number of comparisons is the position of x in the list, but what is the typical position of x?
One reasonable assumption: if x is in the list, it is equally likely to be at any of the n positions, so P[pos] = 1/n.
average number of comparisons
    = sum_{i=1}^{n} (1/n) * i
    = (1/n) * sum_{i=1}^{n} i
    = (n+1)/2.
But if x is not in the list, the number of comparisons is always n.
So, on average, finding something takes about half as long as not finding it, with this definition of "typical".
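A quick numeric check of both facts, again using the sequential_search sketch from above (the list of n = 8 distinct items is just an example):

n = 8
L = list(range(n))                        # any list of n distinct items
present = [sequential_search(L, x)[1] for x in L]
print(sum(present) / n)                   # 4.5, i.e. (n + 1) / 2
print(sequential_search(L, n + 100)[1])   # 8, i.e. n comparisons when x is absent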