A couple of my favorite types and functions in absl:
- Span
For when you want a function to take either an array or a vector (see the sketch after this list).
- string_view
Avoids string copies. Makes your program faster. Its creation was a reaction to situations like https://news.ycombinator.com/item?id=8704318. (It predates that revelation, but internally the plague of string copies has been known for some time.)
- Mutex
A better mutex with deadlock detection.
- Substitute
A string formatting function that is anywhere from 5 to thousands of times faster than snprintf. Profile your code some time. If it's anything like mine, you might be surprised at how much this matters.
- base::Time and base::Duration
Excellent wall time utilities, similar to Go's wall clock time libraries.
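Not anyone's canonical snippet, just a minimal sketch of how a few of these look in use, written against the open-source Abseil release (which spells the last two absl::Time and absl::Duration rather than base::):

```cpp
#include <iostream>
#include <vector>

#include "absl/strings/string_view.h"
#include "absl/strings/substitute.h"
#include "absl/time/clock.h"
#include "absl/time/time.h"
#include "absl/types/span.h"

// Accepts any contiguous sequence of ints: a C array, a std::vector, etc.
int Sum(absl::Span<const int> values) {
  int total = 0;
  for (int v : values) total += v;
  return total;
}

// Accepts any string-like argument without copying it.
void Greet(absl::string_view name) {
  // Substitute uses positional $0, $1, ... placeholders.
  std::cout << absl::Substitute("Hello, $0! Sum is $1.\n", name, Sum({1, 2, 3}));
}

int main() {
  std::vector<int> v = {4, 5, 6};
  int arr[] = {7, 8, 9};
  Sum(v);    // from a vector
  Sum(arr);  // from a C array
  Greet("world");

  // Wall-clock arithmetic with explicit, typed durations.
  absl::Time deadline = absl::Now() + absl::Seconds(5);
  std::cout << absl::FormatDuration(deadline - absl::Now()) << "\n";
}
```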
"- Span
For when you want to take an array or a vector."
How's this different from taking a pair of iterators?
"string_view"
After having read all those articles about C++17's string_view, I still don't really get the real differences. OK, when parsing (i.e. using many substrings) it makes sense (but then you'd usually work with char* anyway); and maybe some people interact with C libraries that only take char* a lot (and in cases where they have to 'cross the boundary' often). So maybe in text-heavy applications (like I imagine C++ applications for the web would be) it makes some sense, but I fail to understand why it's causing this much excitement.
"Substitute"
There are so many fast formatting libraries already - what does this one offer over those? Because it's generally a trade off between completeness/type safety on the one hand, and speed on the other. I use boost::format which is quite slow in benchmarks, but again I don't do much string processing.
"base::Time and base::Duration"
In what way are these different from boost::date_time and std::chrono? Here again the documentation (for Abseil) seems to be missing.
I mean, I'm all for batteries-included libraries - but this just seems to be a few loosely-related classes slapped together and the only reason it's even on here is because it's from Google. There are dozens of similar libraries languishing on sourceforge and github. Compare it to e.g. POCO - this is not even in the same league.
> "- Span For when you want to take an array or a vector."
> How's this different from taking a pair of iterators?
Safety and (a little) convenience. If your function takes a pair of iterators, there's no way to ensure (at compile-time anyway) that both iterators are even pointing into the same array/vector. Also, spans can ensure, with bounds-checking, that only elements within a specified sub-range can be accessed.
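A minimal sketch of the difference, assuming absl::Span (the same argument applies to any span type):

```cpp
#include <numeric>
#include <vector>

#include "absl/types/span.h"

// Iterator-pair interface: nothing ties `first` and `last` together.
int SumIters(std::vector<int>::const_iterator first,
             std::vector<int>::const_iterator last) {
  return std::accumulate(first, last, 0);
}

// Span interface: one argument that carries its own bounds.
int SumSpan(absl::Span<const int> values) {
  return std::accumulate(values.begin(), values.end(), 0);
}

int main() {
  std::vector<int> a = {1, 2, 3};
  std::vector<int> b = {4, 5, 6};

  SumIters(a.begin(), b.end());              // compiles, but undefined behavior
  SumSpan(a);                                // the whole vector
  SumSpan(absl::MakeSpan(a).subspan(0, 2));  // only the first two elements
}
```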
Such bounds-enforcement is particularly important in cases like, for example, SaferCPlusPlus' "TRASectionSplitter", which is a data type that allows you to partition an array/vector/whatever into subsections that can each be safely accessed/modified concurrently from different threads. ("TRASectionSplitter" is not yet documented, but for those interested, example code can be found here[1].)
> After having read all those articles about c++17's string_view, I still don't really get the real differences.
`string_view` is a much cleaner parameter type. `char*` and a `size_t` is two parameters. `string` is really a character buffer; accepting a character buffer as input is odd. `string_view` can be trivially constructed from most non-string data structures, including `vector<char>` and `array<char>`. And, now that it's standard, `string_view` is a very lightweight dependency to add to your interface compared to `boost` or hand-rolled alternatives.
To that last point, I'm not sure I'd use an abseil `string_view`, at least as an interface type, but I appreciate that they have migration to new standards as a design goal.
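For illustration, a minimal sketch using standard C++17 std::string_view (the absl version behaves the same way); the function and variable names are just examples:

```cpp
#include <array>
#include <cstddef>
#include <string>
#include <string_view>
#include <vector>

// One parameter instead of (const char*, size_t), and no std::string temporary.
std::size_t CountSpaces(std::string_view text) {
  std::size_t n = 0;
  for (char c : text) n += (c == ' ');
  return n;
}

int main() {
  std::string s = "a b c";
  const char* c = "a b c";
  std::vector<char> v = {'a', ' ', 'b'};
  std::array<char, 3> a = {'a', ' ', 'b'};

  CountSpaces(s);                     // from std::string, no copy
  CountSpaces(c);                     // from a C string
  CountSpaces({v.data(), v.size()});  // from vector<char>
  CountSpaces({a.data(), a.size()});  // from array<char>
}
```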
> I fail to understand why it's causing this much excitement.
C++ guys just like zero-overhead abstractions. The more we can get, the better. Passing std::string around has always been a sore point, and occasionally prompted misguided optimizations (like refcounting copy-on-write in g++).
Well yes, I'm what one would call a 'C++ guy' myself. Don't pass std::string around, pass const std::string& around. What I see left and right is people saying 'you'll never pass a const std::string& any more!'. The way I see it: you'd pass a string_view in the cases where, in the past, you would have had to copy a part out of a string (which is rare, except in parsers), or when dealing with char* APIs; what's better about passing a string_view than passing const string&?
Nothing. But I would disagree that having to copy a part of a string, or having to pass part of a char buffer that is not a string, is all that rare, even outside of parsers.
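A minimal sketch of the two cases just mentioned (substrings and raw char buffers), with hypothetical function names:

```cpp
#include <string>
#include <string_view>

void TakesRef(const std::string&) { /* read-only use */ }
void TakesView(std::string_view) { /* read-only use */ }

int main() {
  std::string line = "key=value";
  char buffer[] = "bytes from a socket";

  // Passing a substring: const std::string& forces an allocation and a copy.
  TakesRef(line.substr(0, 3));                     // temporary std::string "key"
  TakesView(std::string_view(line).substr(0, 3));  // just a pointer and a length

  // Passing part of a char buffer that was never a std::string.
  TakesRef(std::string(buffer, 5));                // temporary std::string "bytes"
  TakesView(std::string_view(buffer, 5));          // no copy
}
```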
1. Each time a thread acquires a Mutex it must later release it.
2. A thread may not attempt to release a Mutex unless it holds it.
3. A thread may not attempt to acquire an exclusive lock on a Mutex it already holds.
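In Abseil itself, rules 1 and 2 are usually satisfied by using the RAII wrapper absl::MutexLock rather than calling Lock()/Unlock() by hand; a minimal sketch (the Counter class is just an example):

```cpp
#include "absl/synchronization/mutex.h"

class Counter {
 public:
  void Increment() {
    absl::MutexLock lock(&mu_);  // acquired here, released at end of scope
    ++count_;                    // rules 1 and 2 are satisfied automatically
    // Calling another member that also locks mu_ from here would violate
    // rule 3, since absl::Mutex is not recursive.
  }
  int Get() const {
    absl::MutexLock lock(&mu_);
    return count_;
  }

 private:
  mutable absl::Mutex mu_;
  int count_ = 0;  // guarded by mu_
};
```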
For basic sharing of resources between threads, using "access requesters"[1] can be safer and more convenient as they automatically take care of these rules for you.
And if you need to use the mutex directly, the SaferCPlusPlus library provides a "recursive_shared_timed_mutex"[2] (the one missing from the standard library), which allows a thread to hold multiple ("read" and/or "write") locks at the same time (relieving the "self-deadlock" issue). The mutex isn't documented, but it functions just as its name suggests.
Recursive mutexes? SaferCPlusPlus is not seriously recommending replacing standard/sane mutexes with their disfigured recursive cousins? Yuck.
I thought people agreed long ago that their only valid use case was papering over broken/non-existent resource access schemes in applications of yore, so you could try to speed them up by sprinkling magic multi-threading pixie dust.
SaferCPlusPlus does not recommend relying directly on mutexes at all. For most straightforward cases, you can use the "access requesters" to safely manage asynchronous access automatically.
Recursive mutexes are analogous to having multiple pointers (or iterators), some of which are "non-const", to an object in the same thread at the same time. Some suggest that this too is a bad idea. For example, the Rust language does not allow a "mutable" (i.e. "non-const") reference to an object to co-exist with any other reference to that object, even in the same thread.
SaferCPlusPlus does not necessarily disagree with this position, but it also does not require adherence to it, like Rust does. So if SaferCPlusPlus is going to allow multiple pointers (or iterators) to a shared object in the same thread, then it's going to need to lock the mutex protecting the object multiple times from the same thread. Giving each pointer/iterator its own separate lock, as opposed to having one lock encompass all the pointers/iterators, allows the locks to be managed automatically, which guards against data races and ensures that locks are released as soon as it is safe to do so.
Again, there's rarely any reason you'd need to interact with the mutex directly. It's primarily there to support "access requester" functionality.
Also, note that this is a recursive shared mutex, not just a recursive mutex, which means that, for example, it provides a kind of "upgrade mutex" functionality. So if a thread has a "read" lock, it can obtain a "write" lock (blocking if necessary), without having to give up its read lock. Then when it's done with the write lock, it can release it without fear of losing its read lock (or blocking).
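SaferCPlusPlus' own API isn't shown here (it's undocumented, as noted), but the same read-then-upgrade-to-write idea can be sketched with Boost's upgrade locks. Note that Boost's upgrade ownership is exclusive among upgraders, so this is only an approximation of the recursive shared mutex described above:

```cpp
#include <iostream>

#include <boost/thread/locks.hpp>
#include <boost/thread/shared_mutex.hpp>

int shared_value = 0;
boost::shared_mutex mtx;

void ReadThenMaybeWrite() {
  // "Upgrade" ownership: coexists with ordinary readers, and can later be
  // promoted to exclusive ownership without being released first.
  boost::upgrade_lock<boost::shared_mutex> read_lock(mtx);
  int seen = shared_value;  // read under the lock

  if (seen == 0) {
    // Blocks until exclusive ownership is obtained; read_lock is not dropped.
    boost::upgrade_to_unique_lock<boost::shared_mutex> write_lock(read_lock);
    shared_value = 42;  // write under exclusive ownership
  }  // write_lock destroyed: back to upgrade (read) ownership, with no gap

  std::cout << shared_value << "\n";  // still protected by read_lock
}
```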
Thanks for the clarification. The analogy to multiple references (with ≥1 of them allowing for mutation) is an interesting one, though I don't quite understand why having that would imply a need for locking the same mutex multiple times. Will read up on the "access requester" mechanism though.
To my eyes, this code is much less clear than code that uses absl mutexes. Also, to my understanding, reentrant mutexes are much slower than ordinary mutexes, and are never necessary.
But, I have been using absl mutexes for more than ten years, so I'm a bit biased.
Anyway, the point of access requesters is that they are much safer than manually protecting resources with mutexes. That is, using access requesters eliminates the possibility of data races[1], which is important because data races can be particularly insidious bugs.
Just like how smart pointers can improve safety by automatically managing object lifetimes, access requesters can improve safety by automatically managing object lifetimes and asynchronous access.
[1] As long as you adhere to the rule that shared objects not have "unprotected" mutable or indirect (i.e. pointers/references) members.
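The SaferCPlusPlus access requester API itself isn't reproduced here, but the general idea (bundle the object with its mutex and only hand out lock-holding handles) can be sketched in a few lines; the names below are illustrative only:

```cpp
#include <mutex>
#include <utility>
#include <vector>

// Illustrative only -- not SaferCPlusPlus' actual interface. The shared object
// is reachable only through a handle that holds the lock, so every access is
// automatically protected and the lock is released when the handle goes away.
template <typename T>
class Protected {
 public:
  explicit Protected(T value) : value_(std::move(value)) {}

  class Handle {
   public:
    Handle(T& value, std::mutex& m) : lock_(m), value_(value) {}
    T* operator->() { return &value_; }
    T& operator*() { return value_; }

   private:
    std::unique_lock<std::mutex> lock_;  // held for the handle's lifetime
    T& value_;
  };

  Handle Access() { return Handle(value_, mutex_); }

 private:
  T value_;
  std::mutex mutex_;
};

int main() {
  Protected<std::vector<int>> shared{std::vector<int>{}};
  shared.Access()->push_back(42);         // lock taken and released around this call
  auto n = (*shared.Access()).size();     // likewise
  (void)n;
}
```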
To take from javascript, this is a mix of underscore (utility) and, uhh, babel(?) for essentially polyfilling certain runtime features slated for future (hypothetical) approval.
Boost contains over 120 libraries and counting, with over 70 authors and maintainers. There are many parts of Boost that I will never use (e.g. most of the pre c++11 libs), but the pros of having it available when you need it far outweigh the cons.
If a particular library is costing you more in compile time than it's worth in convenience, just rm -rf it from your Boost install -- better yet, write a script to prepend `static_assert(false, "this header is banned by decree of vvanders");` to the top of the banned headers.
There's plenty in Boost to love -- don't let a few rotten apples spoil the bunch.
So I should hack apart my install just to get to some basic level of compiler performance?
Last time I looked at Boost, pulling in any header would pull in all of them; that's why it takes so long to compile. These days, with any modern C++11 compiler, I don't really see the need for Boost.
I value my development time more than anything. Boost is one library that is developed and tested on almost every C++ target platform, providing stable, tested, and performant common utilities to speed up your development and run anywhere.
Reasons not to use boost:
- My code is one monolithic spaghetti and I do not use unit tests.
- I do not want to learn Boost.
- I don't trust template hippies.
Not using Boost is an irrational decision and you know it.