I mean, I don't know how there would be? Unless they were scanning the text of every pop-up for words convincing the user to enter their computer password. There would be no way to determine intention without some sort of language analysis.
I agree with you. However, in this case, I was abusing a legitimate OS prompt (not just making my own), so I don't know if a security image would be a barrier there. It would definitely be one for instances where malicious apps make their own pop-ups.
I mean, as others have mentioned, actual capabilities would be nice. But as long as we're going to have a database, it would have to end up either in user space or in the kernel, and I'm not sure how much I like either option.
Damn, that really puts things into perspective. Granted, attached modals presuppose there's a window to attach to. But I think that would probably be true 9 times out of 10.
Even if not, attaching to the menu bar (with some “chrome” that goes above the menu bar itself, which normal windows don't (can't?) do) would be superior to attaching to nothing.
Yeah, to be perfectly honest, I understand. I think TCC is meant to be the primary consent system, but there are others (such as the Authorization system, and the Service Management framework).
As someone who dove deep into keychain items for a previous write-up, I believe you are misunderstanding this situation. As far as I understand it, many keychain items can be stored in your iCloud keychain. However, your local machine can have its own keychain that's different from the iCloud keychain, with items that are not sent to iCloud.
And besides all that, to my knowledge your local machine password (the password you use to login) isn't stored in a keychain item, so there's no way it could make itself into the iCloud keychain, or your local keychain.
You may be misreading some explanations. Your computer password is used to unlock your local keychain, but it itself is not stored in your keychain. Your local keychain is also not your iCloud keychain; it's not stored in iCloud.
Again, I'm not an Apple developer, so there may be stuff I don't know, but I am a developer in general and I have researched this. The above is my current understanding.
> many keychain items can be stored in your iCloud keychain. However, your local machine can have its own keychain
Yes. That's pretty obvious to anyone who opens Keychain Access.
On the left you will see the following under "Default Keychains":
- login
- iCloud
> Your computer password is used to unlock your local keychain, but it itself is not stored in your keychain.
Yes. That's a fundamental, and again obvious, requirement. Your keychain has to be encrypted somehow, and the encryption key is (IIRC) derived from your user password.
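For anyone curious what "derived from your user password" means in practice, here's a toy sketch of password-based key derivation using PBKDF2. The function name and parameters here are made up for illustration; they are not Apple's actual keychain internals, just the general technique:

```python
import hashlib
import os

# Illustrative only: derive an unlock key from a login password with PBKDF2.
# The password itself never needs to be stored anywhere; only the derived
# key (held in memory while the keychain is unlocked) can decrypt the
# keychain's contents.
def derive_unlock_key(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)           # stored alongside the keychain, not secret
key = derive_unlock_key("hunter2", salt)
print(len(key))                 # → 32 (a 256-bit key)
```

The point being: the same password plus the same salt always reproduces the same key, so the system can unlock the keychain without ever persisting the password.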
Software developers can further secure the keychain by using enclave-tied keychain entries[1].
An important correction, so hopefully this bubbles to the top (this will be appearing on the post as well):
A previous version of this article stated below that this CVE was patched in macOS Sequoia 15.5 et al., but I was mistaken. Despite also being released today, macOS Ventura 13.7.6 and macOS Sonoma 14.7.6 appear not to be patched against this vulnerability.
I wrote that sentence assuming that Apple would have included a patch in all of the releases. It was only later, when I checked the security release notes, that I saw I was not credited under the other two releases. I reached out to Apple to clarify if these releases were patched. As of writing, I have not heard back.
I chose to do my own testing and spun up a virtual machine. After some difficulties I got it updated to macOS Sonoma 14.7.6 and was able to compile and run my proof of concept. It still worked. I would assume the same is true for macOS Ventura 13.7.6. I'm not sure why Apple didn't include the patch in these two releases.
I will update the post when I have more information and/or context.
Thank you for your kind words. To respond: 1. I'm not a "he", I would prefer "they". 2. As I mentioned in another comment, I have not received word back yet on any reward.
I think their "Hall of Fame" (or at least whatever people colloquially refer to as that) is their credits for people who found bugs in their web servers, so I don't think that counts here. I did get credited, so I'm happy about that. Now I just have to wait and see if they determine it's worth a reward (and, if so, how much).
It is absolutely worthy of a reward, and it should be worth a few months of your time. This is a nasty security issue, and you showed a ton of restraint not losing patience with Apple.
Honestly, it's bullshit that you don't already know whether or not you're going to get a bounty.
I will definitely admit, it can be a bit of a pain point that Apple sometimes takes a lot of time to determine a bounty. I'm just waiting patiently now to see what they say. I appreciate your kind words and encouragement.
As someone who's looked into the internals of macOS for a bit now, this is all incredibly fascinating. However, I am curious: do you think capabilities could be implemented like this at a really low level? Part of me thinks we have the security models we do in POSIX because they're simple enough to represent in C code.
The capability systems you're mentioning sound cool, but they sound a lot more complex. And if that's true, and they aren't built with irreducible complexity, then it would be possible to work around them by just pulling out bits and pieces from the system and abusing them.
seL4 is a capability-based operating system toolkit, entirely implemented in C. The core operating system is just a few thousand lines of code. It's even mathematically proven to be bug-free, which is totally insane.
It even uses a capability to allocate (assign) memory. So you typically have a microservice (userland process) in charge of memory on the whole system. Other processes get heap memory allocated to them by asking that service for it. (Though typically you'll allocate large blocks and divide them up using a normal allocator.)
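To make that concrete, here's a toy model of the idea (this is not seL4's actual API, just an illustration of the pattern): a userland "memory service" owns an arena and hands out unforgeable tokens for blocks of it. Only a token holder can touch its block, so allocation itself is capability-gated.

```python
import secrets

# Toy capability-gated allocator. The token is the capability: holding it
# is the only way to reach the block, and it can't be guessed or forged.
class MemoryService:
    def __init__(self, size: int):
        self.arena = bytearray(size)
        self.caps = {}          # token -> (offset, length)
        self.next_off = 0

    def allocate(self, length: int) -> str:
        if self.next_off + length > len(self.arena):
            raise MemoryError("arena exhausted")
        token = secrets.token_hex(8)          # unforgeable handle
        self.caps[token] = (self.next_off, length)
        self.next_off += length
        return token

    def write(self, token: str, data: bytes) -> None:
        off, length = self.caps[token]        # KeyError = no capability
        self.arena[off:off + min(len(data), length)] = data[:length]

    def read(self, token: str) -> bytes:
        off, length = self.caps[token]
        return bytes(self.arena[off:off + length])

svc = MemoryService(64)
cap = svc.allocate(16)
svc.write(cap, b"hello")
print(svc.read(cap)[:5])  # → b'hello'
```

In seL4 the "token" is a kernel-managed capability rather than a random string, so the kernel itself enforces that it can't be forged, but the access pattern is the same.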
I think Apple uses an L4 variant for their SEP co-processor, though I'm not sure if it's that specific one. Sounds like another OS I'll probably have to do a deep dive into at some point.
As that page points out, POSIX file descriptors are effectively c-lists. A capability operating system would use similar mechanisms to control access to resources other than just open files.
The other things GP mentioned (logging, interdiction, UIs for visibility/control, etc) are layers that you would implement on top of the lowest-level capability system.
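The "file descriptors are c-lists" point is easy to see in action: a process can hand an open descriptor to another socket endpoint via SCM_RIGHTS, and the receiver gains access to the underlying resource without ever seeing a path or passing a permission check. A minimal sketch (both endpoints are in one process here for brevity; Python 3.9+'s `socket.send_fds`/`recv_fds` wrap the SCM_RIGHTS machinery):

```python
import os
import socket

# Two connected UNIX-domain endpoints; ancillary data (the fd) travels
# between them just as it would between two separate processes.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

r, w = os.pipe()                         # a resource we hold a capability to
socket.send_fds(parent, [b"cap"], [r])   # transfer the read capability

msg, fds, _, _ = socket.recv_fds(child, 16, 1)
os.write(w, b"hello")
print(os.read(fds[0], 5))                # → b'hello' (read via transferred fd)
```

This is exactly the capability-transfer operation: possession of the descriptor is the access right, independent of who originally opened the file.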
Ah, thanks for the reference! Yes, there are a lot of very old capability systems in computing history.
I've got a copy of Capability-Based Computer Systems on my shelf that I've been meaning to read for a while, and it covers the Plessey System 250: https://homes.cs.washington.edu/~levy/capabook/
Very much not a new concept! Though note that this book was published in 1984 and there have been several newer developments in the capability literature since then. (Revocation for example, which is mentioned as an issue in chapter 10 but has since been addressed with some capability design patterns.)