Yes. AFAIK, it evolved over time across 3+ package managers (`npm`, `yarn`, `pnpm`, etc.), but the current state is similar across them (including the behavior of Dependabot).
Python's Poetry has `poetry audit` as well, and there are third-party tools such as Safety (Python), Nancy (Golang), etc. Lots of languages have something like this.
They support lockfiles and tools like `audit`, yes. But they do not support having multiple versions of a dependency.
Tools that load libraries from a *PATH (GOPATH, PYTHONPATH, the JVM classpath) usually grab the first entry they encounter that contains the appropriate symbols. That is incompatible with having multiple versions of a package in the same process.
On the other hand, Rust and Node.js support this, each in its own way. In Rust, artifact names are transparently suffixed with a hash to prevent collisions, so two versions of a crate can coexist in one build. And in Node.js, almost all symbol lookups are resolved via relative filesystem paths, so nested copies of a package don't clash.
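To make the contrast concrete, here's a rough sketch (TypeScript, with `left-pad` used purely as a hypothetical package name) of how Node-style resolution works: it walks up the filesystem from the importer's directory looking for the nearest `node_modules`, which is exactly why two different versions of the same package can live in one dependency tree.

```typescript
// Sketch of Node-style module resolution: walk up from the importing file's
// directory, checking each node_modules/ along the way. Because every package
// resolves against the node_modules nearest to *it*, a nested copy of a
// dependency can shadow a different version installed higher up.
import * as fs from "fs";
import * as path from "path";

function resolvePackage(fromDir: string, pkgName: string): string | null {
  let dir = path.resolve(fromDir);
  while (true) {
    const candidate = path.join(dir, "node_modules", pkgName);
    if (fs.existsSync(path.join(candidate, "package.json"))) {
      return candidate; // nearest match wins, but only for this importer
    }
    const parent = path.dirname(dir);
    if (parent === dir) return null; // reached the filesystem root
    dir = parent;
  }
}

// app/node_modules/left-pad (v2) and app/node_modules/foo/node_modules/left-pad (v1)
// can both be loaded in the same process, because "foo" starts resolving from
// its own directory and finds the nested copy first.
console.log(resolvePackage(process.cwd(), "left-pad"));
```

Contrast that with a single GOPATH/PYTHONPATH/classpath scan, where "first match wins" is global to the whole process, so only one version can ever be loaded.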
Follow-the-sun is great for on-call rotations, but in my experience, for regular project work, the need for a handoff each day ends up being too much overhead. Teams in vastly different time zones wind up working mostly independently on completely different projects.
I like my job (I think...) and am on my third burnout. There's no worse feeling than being barely able to muster pinky-finger-strength brain processing power over an 8-hour working day, on high-visibility projects with tight timelines and daily stand-up reporting on progress. Today I literally noped to bed with the laptop, looked at the problem a few times, and concluded it can wait until Monday.
This is so interesting. How do you manage to catalog all the subreddits in existence? Is there a page which lists them all? I assume the process from there is retrieving the most recent posts?
> How do you manage to catalog all the subreddits in existence?
Oh, I don't want to give the wrong impression. I'm not cataloging anywhere near _all_ subreddits, or all of anything. More or less, I started one day with one subreddit and built a system that just churns through what's there. The API is limited and there are only so many creative ways to request the data while staying within the TOS; since I want to keep being able to function, I've made sure to stay within the boundaries set forth.
Rather than try to get _everything_ (there are services out there with databases of a lot of past/current Reddit data), which ends up being stale (maybe useful for a content farm), I'm interested instead in a relatively accurate, current picture.
This project initially grew out of an interest in building an automated moderation bot to help subreddits being spammed with content from accounts that are so obviously spam at posting time that it's astounding it ever goes live. A few months into developing the initial crawling/database/hashing setup and getting things tuned up, they announced the API changes, and I lost all interest in the moderation aspects, but I had enjoyed using it as a test bed for learning new things. (I came into this having no idea what a Hamming distance was.)
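For anyone else who hadn't met it: a Hamming distance is just the number of bit positions where two equal-length hashes differ, which is why it's handy for "is this post basically a repost of that one" checks on perceptual/content hashes. A toy sketch of the idea (my own illustration, not the commenter's actual setup):

```typescript
// Toy example: Hamming distance between two 64-bit hashes (e.g. perceptual
// hashes of two posts). A small distance suggests near-duplicates.
function hammingDistance(a: bigint, b: bigint): number {
  let x = a ^ b; // bits that differ
  let count = 0;
  while (x !== 0n) {
    count += Number(x & 1n);
    x >>= 1n;
  }
  return count;
}

const hashA = 0xf3a1c5e9902b77d4n;
const hashB = 0xf3a1c5e9102b77d4n; // differs by a single bit
console.log(hammingDistance(hashA, hashB)); // 1 => probable repost
```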