I was so confused by this when I learned it after first hearing about big-O notation. Like why are we only talking about the upper bound? Isn't the lower bound at least as, if not more, important? (spoiler: yes) I'm glad you wrote this since it reinforces that understanding.
> Isn't the lower bound at least as, if not more, important? (spoiler: yes)
I can't think of any situations where I care about the lower bound.
I mainly care about the upper bound and the average. Beyond that, some percentile statistics are good.
And by "the" upper bound, I mean a tight upper bound; a loose one isn't useful. But you want a tight lower bound too, or it'll just be Omega(1) and that has no value at all.
And many algorithms have quick-exit paths that give them a tiny lower bound, so no single big-theta bound describes them (see the sketch at the end of this comment).
(Unless by "lower bound" you actually mean "lowest/tightest upper bound"?)
Yes what you're describing is theta - both an upper and lower bound (within constant factor).
Unless we're talking about a requirement like 'we need to do this within O(...)', theta is generally what we mean (as software engineers, not math/CS academics) even when we say O.
Because, as you point out, it would be stupid to, say, file a bug report that some function is O(n^100) and too slow if it's also O(n^2) - both are correct (the former any time the latter is), but it's not really helpful to cite the former. Theta effectively says you've found the smallest O.
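For reference, the textbook definitions, spelling out "within constant factor" (c and n0 are constants you get to pick):

    f(n) = O(g(n)):      there exist c > 0 and n0 with f(n) <= c*g(n) for all n >= n0
    f(n) = Omega(g(n)):  there exist c > 0 and n0 with f(n) >= c*g(n) for all n >= n0
    f(n) = Theta(g(n)):  both of the above hold (possibly with different c's)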
> Yes what you're describing is theta - both an upper and lower bound (within constant factor).
No, I'm not.
A tight upper bound is only sometimes the same thing as theta.
Insertion in a vector or a hash table is best case 1, average case 1, worst case n. Quicksort is average case n log n, worst case n^2. Bubble sort is best case n, average case n^2, worst case n^2.
None of those algorithms is bounded above and below by the same function.
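As a concrete sketch of one of those, here's the usual early-exit bubble sort (my own illustration, not anyone's code upthread): on already-sorted input it makes one pass and stops, so the best case is about n comparisons, while the worst case is about n^2/2.

    def bubble_sort(a):
        n = len(a)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):  # worst case: ~n^2/2 comparisons in total
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:  # no swaps means the list is already sorted:
                break        # quick exit, so the best case is ~n comparisons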
When Theta exists, it does imply that the upper bound is tight, which is the feature that's actually important. But Theta additionally asserts a lower bound, which rarely matters. Theta is not what you're actually looking for the vast majority of the time.
I still can't think of a single situation where I care about a "lower bound within a constant factor". I care about average time, I care about upper bounds, and occasionally I want an algorithm where the time is always exactly the same for a particular n. But not an asymptotic lower bound.
Preventing smartass answers for "upper bound" is a good goal, but the way you do that is not by switching to Theta.
For this purpose, when you say 'best case'/'average case'/'worst case', those are three different functions, and each one gets its own bound. Normally, when the case isn't specified, we'd be talking about the worst case.
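Spelled out for quicksort with the standard textbook figures (just restating the point above), each case is its own function of n and gets its own tight bound, even though no single Theta describes the algorithm over all inputs:

    T_best(n)  = Theta(n log n)
    T_avg(n)   = Theta(n log n)
    T_worst(n) = Theta(n^2)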