Besides being slow, there's also an implicit salt, so rainbow tables that quickly check every account for "password" don't exist. Still, if you just used a simple dictionary word present in e.g. /usr/share/dict/words (my system has 234,937 entries), the slowness buys you less time than you might think. I have a Ryzen 9 5900X, 12 cores; using a random Go implementation of bcrypt I found, with the default work factor of 10 and 24 threads, it takes my machine about 18 minutes to get through that entire dictionary. That works out to about a thousand years if I wanted to go through 31 million accounts and every one was a worst-case, end-of-dictionary hit. But there are quite a few more than a thousand of my CPU or better out there, some surely part of botnets, which routinely number in the thousands of devices, and probably faster bcrypt implementations too. Earlier this year, the FBI dismantled a botnet with 19 million infected devices globally and over 600,000 US IP addresses. Surely some of those were weak IoT devices, but still, there's a lot of compute available to bad actors, so you shouldn't necessarily rely on bcrypt et al. to protect a very weak password. (They are rather good at protecting ordinarily weak and mid-strength passwords, though, and there's an opportunity cost for all that compute.)
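
For a sense of what that test looks like in practice, here's a minimal sketch of the same idea (not the exact code I used): it assumes golang.org/x/crypto/bcrypt, makes up a target hash in place of a leaked one, and fans the dictionary out over one goroutine per logical CPU.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"sync"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	// Hypothetical target: pretend a leaked row stored bcrypt("orange") at the
	// default cost of 10. In a real attack this would come from the dump.
	target, err := bcrypt.GenerateFromPassword([]byte("orange"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}

	f, err := os.Open("/usr/share/dict/words")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	words := make(chan string, 1024)
	var wg sync.WaitGroup

	// One worker per logical CPU -- 24 on a 5900X.
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for w := range words {
				// CompareHashAndPassword re-runs the full cost-10 key schedule
				// for every candidate, which is where all the time goes.
				if bcrypt.CompareHashAndPassword(target, []byte(w)) == nil {
					fmt.Println("match:", w)
				}
			}
		}()
	}

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		words <- scanner.Text()
	}
	close(words)
	wg.Wait()
}
```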
If you don't reuse that password anywhere anymore, does it even matter, though? Some services might keep older passwords under the old hashing scheme without ever updating the hash algorithm, but I don't know whether that's the case here.
I would hope that a system competent enough to migrate to bcrypt would also be competent enough to rehash the entire database: logins check bcrypt(oldHash(pw)), and if it matters, entries can later be upgraded to bcrypt(pw). Of course, "Hope is not a strategy".
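
As a sketch of that migrate-then-upgrade flow, assuming (purely for illustration) that the legacy scheme was unsalted SHA-1 and that every stored hash was wrapped as bcrypt(oldHash(pw)) at migration time; the struct and function names here are made up, not from any particular codebase:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

// oldHash stands in for the legacy pre-bcrypt scheme (unsalted SHA-1 here),
// hex-encoded so the bytes fed to bcrypt are plain ASCII.
func oldHash(pw string) []byte {
	sum := sha1.Sum([]byte(pw))
	return []byte(hex.EncodeToString(sum[:]))
}

// User stands in for a database row.
type User struct {
	Hash     []byte // bcrypt(oldHash(pw)) after migration, bcrypt(pw) after upgrade
	Upgraded bool   // true once Hash is a direct bcrypt(pw)
}

// CheckAndUpgrade verifies a login against whichever form the row holds and,
// since the plaintext is in hand on a successful login, upgrades wrapped
// legacy rows to a direct bcrypt(pw).
func CheckAndUpgrade(u *User, pw string) bool {
	candidate := []byte(pw)
	if !u.Upgraded {
		candidate = oldHash(pw)
	}
	if bcrypt.CompareHashAndPassword(u.Hash, candidate) != nil {
		return false // wrong password
	}
	if !u.Upgraded {
		if newHash, err := bcrypt.GenerateFromPassword([]byte(pw), bcrypt.DefaultCost); err == nil {
			u.Hash = newHash
			u.Upgraded = true
		}
	}
	return true
}

func main() {
	// Migration step: wrap the legacy hash in bcrypt without knowing the password.
	wrapped, _ := bcrypt.GenerateFromPassword(oldHash("hunter2"), bcrypt.DefaultCost)
	u := &User{Hash: wrapped}

	fmt.Println(CheckAndUpgrade(u, "hunter2"), u.Upgraded) // true true
	fmt.Println(CheckAndUpgrade(u, "hunter2"))             // still true against the new hash
}
```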