On the few occasions that I've seen a queue backed by a Postgres table, when a job was taken, its row in the database was locked for the duration of the processing. If the job was finished, a status column was updated so the job wouldn't be taken again in the future. If it wasn't, maybe because the consumer died, the transaction would eventually be rolled back, leaving the row unlocked for another consumer to take it. But the author may have implemented this differently.
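Roughly something like this (table and column names are made up, and I'm assuming FOR UPDATE SKIP LOCKED so other consumers skip rows that are already being worked on):

    BEGIN;

    -- Take one unfinished job and hold its row lock for the whole processing step.
    SELECT id, payload
    FROM jobs
    WHERE status = 'pending'
    ORDER BY created_at
    LIMIT 1
    FOR UPDATE SKIP LOCKED;

    -- ... process the job while the transaction stays open ...

    -- Mark it done so it's never taken again. If the consumer dies before this,
    -- the transaction rolls back and the row lock is released for another consumer.
    UPDATE jobs SET status = 'done' WHERE id = 42;  -- 42 = the id returned above

    COMMIT;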
That's a good approach as long as the worker stays connected to the database for the duration of the job.
If an external process is responsible for marking a job as done, you could add a timestamp column that acts as a timeout. The column is updated right before the job is handed to the worker, and any job whose timestamp is older than the timeout is considered abandoned and can be taken again.
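Again with made-up names, an arbitrary 15 minute timeout, and FOR UPDATE SKIP LOCKED so two claimers don't grab the same row:

    -- Claim a job by stamping it before handing it to the worker. Jobs whose stamp
    -- is older than the timeout are treated as abandoned and become eligible again.
    UPDATE jobs
    SET claimed_at = now()
    WHERE id = (
        SELECT id
        FROM jobs
        WHERE status <> 'done'
          AND (claimed_at IS NULL OR claimed_at < now() - interval '15 minutes')
        ORDER BY created_at
        LIMIT 1
        FOR UPDATE SKIP LOCKED
    )
    RETURNING id, payload;

    -- The external process later marks completion on its own:
    -- UPDATE jobs SET status = 'done' WHERE id = ...;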
Not the author, but I've used PG like this in the past. My criteria for selecting a job were (1) the job was not locked and (2) it was not in a terminal state. If a job was in the "processing" state and the worker died, that lock would be freed and the job would be eligible to get picked up again, since it's not in a terminal state (e.g., done or failed). This can be misleading at times, because a job will be marked as processing even though it's not actually being processed.
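I didn't say how the lock was held; one way to get exactly that behavior is a session-level advisory lock, which Postgres releases automatically when the worker's connection dies (all names and ids below are illustrative):

    -- (2) Find a candidate that isn't in a terminal state.
    SELECT id, payload
    FROM jobs
    WHERE status NOT IN ('done', 'failed')
    ORDER BY created_at
    LIMIT 1;

    -- (1) Only proceed if the job isn't locked; otherwise try the next candidate.
    SELECT pg_try_advisory_lock(42);  -- 42 = the id returned above

    -- The status update is committed, so other workers can see it.
    UPDATE jobs SET status = 'processing' WHERE id = 42;

    -- ... do the work. If the worker dies here, the advisory lock is released, but
    -- the committed 'processing' status sticks around, which is the misleading part.

    UPDATE jobs SET status = 'done' WHERE id = 42;
    SELECT pg_advisory_unlock(42);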