You’ve saved me tens if not hundreds of hours with ripgrep, and I’ve become a huge evangelist of it at my workplace. When I’m helping someone understand how to debug customer issues, the first thing I tell them is to install ripgrep. Truly a fantastic piece of software.
Any time! Yeah, jq is great, but my use case is covered better by these two together.
Truly the Unix philosophy at its finest. It's the only way I search JSON these days! (or YAML with "yq | gron | rg" to get results, pop into (n)vim and do my thing :)
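For anyone who hasn't seen gron: it flattens JSON into one assignment per line, which is what makes a line-oriented tool like rg useful on it. A tiny sketch (the JSON is made up, and grep stands in for rg so the snippet runs anywhere):

```shell
# gron turns nested JSON into greppable "path = value" lines.
# `echo '{"user":{"name":"amy"}}' | gron` emits:
gron_output='json = {};
json.user = {};
json.user.name = "amy";'

# So `gron file.json | rg name` surfaces the full path to every match:
echo "$gron_output" | grep 'name'
# prints: json.user.name = "amy";
```

Each matching line carries the whole path from the root, so you know exactly where in the structure the value lives, no jq query required.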
Sorry, how does ripgrep save you tens of hours? I get that it is faster than regular grep, but that doesn't really answer the question; I don't find myself stalled waiting for grep. The only reasonable explanation would be something ripgrep does that grep actually doesn't. I could try to guess, but have no confidence I would guess right.
1. Available on Windows without WSL, possibly the biggest time saver for those affected
2. Respects .gitignore by default, so it does not search node_modules/ or build/ etc.; huge noise reduction there.
3. -t/-T filters by file type, which is a very nice feature; again, signal to noise
4. The combo of speed, the above, and recursive-by-default behavior means you end up searching much larger corpora by default, like “all the microservices in the cluster” or “my entire home dir”, because you know it's some .xml file mentioning “jackson” where you saw this config you need before.
5. For some reason I never remember which regex features are grep vs egrep, so I end up just testing on a bunch of strings to see if I have to like backslash the plus operator or whatever. With rg it's like “oh this is going to have the same syntax as JS regex.”
6. Unicode compatibility by default could save you that sort of time maybe on specific workloads?
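To make points 3 and 5 concrete, a quick sketch (the rg paths and patterns are made up; the grep lines below stand in to show the dialect difference):

```shell
# Point 3: -t/-T filter by registered file type (see `rg --type-list`):
#   rg -t xml jackson ~/     # search only XML files under $HOME
#   rg -T js TODO .          # search everything except JavaScript
# Point 5: with grep you have to remember which dialect you're in.
# In BRE, '?' is a literal character; in ERE it means "optional":
printf 'color\ncolour\n' | grep 'colou?r' || true   # BRE: no matches
printf 'color\ncolour\n' | grep -E 'colou?r'        # ERE: matches both lines
# rg 'colou?r' always behaves like the ERE/JS-style version.
```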
Totally anecdotal, but I have found ripgrep orders of magnitude faster than grep when searching a large corpus of data (in my case, many multi-hundred megabyte to gigabyte XML files). As in, ripgrep completed the search in seconds, grep took multiple minutes. I'm sure I could have done some research to optimise/parallelise grep, but ripgrep worked doing the "dumb" search.
I use ripgrep on git bash for Windows, and my team members act like I’ve got searching superpowers. Searching on windows is such a pain and this makes it easy. Thanks so much for making this fantastic tool!
I use ripgrep and "everything" all the time on Windows to find files and where I put stuff :) . My filing system leaves something to be desired, as do my many idea test folders.
`ripgrep` is my favorite of the "new" linux utilities, it makes searching for a single string across all of my cloned repos extremely easy, and especially for diving into multiple versions of vendored dependencies.
`ripgrep` is my favorite of the "new" linux utilities
Yeah, but only when used together with fzf, the other favorite new cross-platform shell utility. I mean, after rg spits out a list you do want to narrow it down and then do something with the files in that list, right?
You should not have done this unless you want to further normalize the practice of namespace squatting. This is the same type of behavior that leads to ___domain squatting. While arguably slightly more benign in the sense of hedging against typosquatting, if everyone started doing things like this, we'd quickly run into namespace exhaustion problems as people ballooned their package namespace footprint.
Before you do something like that, always ask yourself: "What if everyone else started doing this?"
If the result feels like a nightmare in the making, don't do it.
> Before you do something like that, always ask yourself: "What if everyone else started doing this?"
No, I don't think so. There is no universality implied in my comment or in the specific practice here. You can make value judgments based on specific circumstances. For example:
* How many people try 'cargo install rg' and have it do the wrong thing? I'd say "probably a lot."
* Is 'rg' by itself a likely useful or desirable name for some other crate? No, I don't think so.
This doesn't have to mean that everyone should do it for every possible alias of every crate out there. You can say things like "yeah I think it makes sense to squat a name here to improve failure modes for folks."
Other than that, I have squatted a few names before. I don't see anything wrong with the practice in and of itself. It's when it gets abused that it starts to become a problem.
I've worked on various package management ecosystems for close to a decade now, and I wouldn't qualify this (if 'burntsushi had done it) as namespace squatting. It's clearly not an attempt to reserve a name for unspecified future use (or as a potential typosquatting target); it's the name of the binary installed by the crate and an obvious mistake for an installing user to make.
Even flat namespaces are virtually infinite; a couple of extra names that correct user error do not pose a serious exhaustion risk.
I'd like to note there are three perverse incentives that lead to abuses of public namespaces (that I am aware of - please tell me if I've missed any):
1.) The use of names as a speculative financial instrument (in all shades of grey, up to and including extortion for lapsed or stolen names)
2.) The use of names as vectors of attack, such as by exploiting typos or homographs (such as malicious packages)
3.) The reserving of names you don't have a sincere or immediate intention to use (hoarding/FOMO)
This isn't very much like the situation with domains, which is primarily a result of #1 (there is no market for crates.io names, as far as I'm aware). #3 is a problem to some degree on crates.io, my understanding is that they basically treat this as a human moderation problem. #2 is endemic to all package managers.
By putting a helpful instead of malicious package here, the community (and Richard Dodd in particular) are able to mitigate the hazard of #2 (unless this account is compromised or turns malicious - a better but imperfect situation). If a project called `rg` comes around, they can appeal to moderators to get this name, and probably succeed (as if this were a #3 problem).
This isn't a perfect way to do things by any means, but it seems like a decent balance of concerns to me.
> #3 is a problem to some degree on crates.io, my understanding is that they basically treat this as a human moderation problem
I think it's more accurate to say that they consider dealing with this out of scope. "I want this name that has been unused since it was added as a placeholder package 7 years ago" is not something that the human moderation will help you with. The extent of human moderation on crates.io is basically "This is malicious or illegal and was reported to us and we looked and agreed so removed it"
Someone else already has 'nyt.com' ('rg'), GP is saying (not saying I agree) 'nytimes.com' ('ripgrep') behaviour encourages other someone elses to do that sort of squatting, where they don't own the thing that is clearly intended.
I'm fully in support of exhausting namespaces in programming languages. It's really annoying that people keep making one-off projects with weird names that reinvent the wheel.
In CPAN, you create a module with a hierarchical name (Net::LDAP), and people inherit from it and extend the namespace to add new functionality (Net::LDAP::Batch). Finding a package that does what you want is [relatively] easy. Old code gets maintained rather than somebody reinventing it for the 72nd time with a hodge-podge of functionality.
(I'm the author of ripgrep.)