Proof: let's take it for granted that log(n!) = O(n log n). It remains to show that log(n!) = Omega(n log n).
log(n!) = sum of log i for i from 1 to n
>= sum of log(n/2) for i from n/2 to n (we drop the first half of the terms, all of which are nonnegative, and under-approximate each remaining term by log(n/2); technically n/2 should be floored or ceiled, but that doesn't affect the result)
>= (n/2) log(n/2)
= (n/2) (log n - log 2)
= (1/2) n log n - (1/2) n log 2
Hopefully I don't need to bother proving that c (n log n) + k n = Theta(n log n) for constants c > 0 and k. We've now shown that log(n!) is both Omega(n log n) and O(n log n), so log(n!) = Theta(n log n) by definition.
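As a quick numerical sanity check (a Python sketch of my own, not part of the proof), you can compare ln(n!) against the two bounds used above, (n/2) ln(n/2) from below and n ln n from above:

    import math

    # Sanity check of the sandwich:  (n/2) ln(n/2)  <=  ln(n!)  <=  n ln(n)
    # ln(n!) is computed via lgamma(n + 1), which equals ln(n!) for integer n.
    for n in (10, 100, 1_000, 10_000, 100_000):
        log_fact = math.lgamma(n + 1)        # ln(n!)
        lower = (n / 2) * math.log(n / 2)    # the Omega(n log n) bound derived above
        upper = n * math.log(n)              # the O(n log n) bound taken for granted
        print(f"n={n:>7}  lower={lower:14.1f}  ln(n!)={log_fact:14.1f}  upper={upper:14.1f}")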
So the root of this thread is wrong (and so is the parent). Furthermore, the fact that log(n!) is smaller than n log n by a constant factor is irrelevant when comparing log(n!) to Theta(n log n). What would be meaningful is comparing formulae for the exact number of comparisons performed by different algorithms.
In other words, while log(n!) grows exactly as fast as n log n asymptotically, its exact value is still smaller than n log n, so it is "better" only in the sense of exact comparison counts.
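To make that concrete, here's a small Python sketch (my own illustration, with arbitrarily chosen values of n) comparing ceil(log2(n!)), the information-theoretic lower bound on worst-case comparisons for sorting n distinct keys, against n log2(n); both grow as Theta(n log n), but the exact counts differ:

    import math

    def lg_factorial(n: int) -> float:
        """log2(n!), computed via the gamma function to avoid huge integers."""
        return math.lgamma(n + 1) / math.log(2)

    # ceil(lg n!) is the fewest comparisons any comparison sort can guarantee
    # in the worst case (floating-point approximation; fine for illustration).
    for n in (16, 256, 4_096, 65_536, 1_048_576):
        exact_bound = math.ceil(lg_factorial(n))
        n_lg_n = n * math.log2(n)
        print(f"n={n:>9}  ceil(lg n!)={exact_bound:>10}  n lg n={n_lg_n:14.1f}  ratio={exact_bound / n_lg_n:.3f}")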