Off topic, but I once found a Magento SQL database backup by visiting the robots.txt page of a company that I will not name. I told them how I was able to get to this file and laid out the other vulnerabilities in their site setup. I had access to all of their customer emails, all of their admin logins, encrypted passwords, and salts. I was able to find git credentials and a whole swathe of information that shouldn't have been available...
They fixed the issue pretty fast, but what bugs me is that I didn't get a thank you. I wasn't looking to gain anything, but I figure telling someone that their site is exposed warrants some gratitude.
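The irony of robots.txt is that a Disallow line doesn't hide a path, it advertises it to anyone who reads the file. A hypothetical example of the kind of entry that leaks a backup (the paths here are illustrative, not the actual company's):

```
User-agent: *
Disallow: /admin/
Disallow: /backups/magento_db.sql.gz
```

Crawlers politely skip those paths; a curious human just requests them directly, which is presumably how the backup above was found.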
Still, I don't understand why more people don't sell exploits to the highest bidder. It seems counterintuitive to me.
Maybe there are more people who sell the exploits and you just don't hear about it as much as people who submit them to the corporations before publicizing them.
Don't underestimate the value of money that's guaranteed instead of a hypothetical possibility, now instead of maybe sometime, yours free and clear instead of legally dodgy, that you can boast to friends and future potential employers about instead of hiding as a shameful secret.
On the seller side, how easy is it to actually get paid? There's no point in trying to sell exploits if you're just going to get cheated, get busted selling to some sort of undercover law enforcement, or just go to a lot of trouble for not a lot of payoff.
From a buyer perspective, you need to have a way to verify an exploit, or else you're just buying a pig in a poke. And you need a way to monetize the exploit, or some other motivation. And you really have no way to know how long your exploit will remain functional.
Easy. All the sting operation has to do is make it clear to the seller what the "buyer" "intends" to do with the bug. It doesn't even have to be overt: they could simply say "we are looking to pay $10,000 for a bug that would enable us to download all the private photographs from Justin Bieber's Facebook account".
This only applies to the US, as the laws are probably different elsewhere. The CFAA[1] is a very vague and broad law that aims to stop people from accessing systems, sending malicious data, etc. It is intentionally written in such a way as to be forgiving to the victim, since security is hard by default [citation needed]. So even if you found an exploit without using it yourself, you'd probably be charged with aiding and abetting or something similar.
If you exchange money for an exploit that you know will be used to commit a specific crime, you are an accessory to that crime. The CFAA doesn't have much to do with it.
Selling exploits in general is not that legally risky†. Prosecutors have to prove mens rea at trial, beyond a reasonable doubt. People sell bugs to anonymous marketplaces all the time.
The question isn't whether selling Facebook bugs to the black market is itself illegal. It's whether the DOJ could set up a sting to capitalize on the greed of people who would do that. Yes, they could.
† It's not not legally risky, either, especially in the case of bugs like these, where you've been given permission to attack Facebook's servers only in conjunction with their bounty program --- your civil liability to a website that doesn't run a bounty, if you sold a bug you found in their site and it was used in some way to harm them, could be astronomical.
Yes, this particular class of bug isn't all that useful. If someone started using the exploit it wouldn't be long before a user complained that their comments were being deleted and then Facebook would figure it out in a hurry.
I see. So, the business plan here is: outbid Facebook to buy the rights to arrange the commission of, what, tens of thousands of felonies, in order to secure a marginal benefit for a competitor to the world's most popular photo sharing application, where those rights expire instantaneously as soon as one of the best security teams on the planet notices what's happening.
Best security teams on the planet? You are talking about Instagram? I've read multiple reports of their properties being completely owned in the last few months, just here. The fact that they are still up is a testament to the researchers who reported the errors to FB.
Also, many people invest in even worse and more fraudulent schemes. Publicly traded companies have scammed entire states and nations, costing dozens of billions to trillions of dollars, all while NYSE investors trade their stock like cash.
Edit: If you can't deal in facts, deal in downmods and unsubstantiated platitudes.
Haha, you can't be serious, right? I'm sorry, but I honestly found that amusing: Instagram engineers scratching their heads as to why comments are disappearing while there's an uprising of users, until someone finally decides to create a secure alternative and is upheld as the savior.
Realistically, though, if this were to become big enough to promote an alternative, then Instagram would be all over it and would fix it within minutes.
Don't underestimate the professional value of disclosing a vulnerability to the company. And I don't solely mean the value of the publicity and the exposure of being public about the disclosure. The security community is surprisingly small. Getting a one-off splash story in a rag like Business Insider is nothing compared to building a back-and-forth with someone like Alex Stamos.
No, it would be worth much less than $10,000 to anyone else.
There is a specific kind of bug that is worth 6 figures on the black market: client-side remote code execution. Somehow, HN has gotten the impression that the going rate for the hardest bugs in the world to reliably weaponize is actually the going rate for all bugs everywhere.
So you're saying client-side remote code execution bugs are the hardest to reliably weaponize? Do I understand correctly? I figured those bugs would be the easiest to weaponize.
If you had this exploit, how would you have monetized it? Do you know who to talk to? Do you know where to go on the darkweb to find the people who know the people who have the money to actually pay you for this? Do you know how to negotiate with them to actually guarantee payment? Do you know how much an exploit which can only delete content -- not generate false content, or access ACL'd content -- is actually worth?
Long story short, companies offer guaranteed set-size rewards as a counterpoint to the black market's potential highly variant payouts.
Companies like Facebook offer rewards as an incentive to get people to report bugs to them rather than to blog posts. The next highest bidder anywhere in the world for bugs like these is ε.
> I know, right? $10,000! Facebook is worth billions!
But is causing monetary loss to Facebook, specifically, worth much to anybody? Anybody who would take the risk of committing a crime to do so?
This bug deletes content on Instagram. Unless you are the most underhanded of Instagram competitors, or just want to cause wanton Instagram picture destruction, I don't see why you as a third party would pay for it. Also, since I assume FB has backups, this is at most a relatively sophisticated DoS attack. Now, if you could insert data, then you'd have stage 1 of an APT deployment platform, which is a whole other story.
Also, you underestimate the lifetime earnings potential of "I discovered an attack on one of the 2-3 most popular internet platforms on earth at 13 and practiced textbook responsible disclosure with it". Beyond that, selling bugs to the highest bidder is very hard to justify, ethically speaking, and a lot of people put a high price on their integrity.
A bug that allows unauthorized children to delete content from other users' accounts may point to other vulnerabilities, which could have even more value.
IIRC FB/Instagram didn't pay out on a report that exposed their AWS keys, though...
I'm having trouble connecting your first sentence to your second. It sounds a little like saying "so, the US army has M109 Howitzers, so maybe they'd be interested in this 3D-printed zip gun I just made."
I'm not talking about the vulnerability that this kid found. I'm talking about vulnerabilities that would allow access deep into the Facebook infrastructure. I think there are in fact some vulnerabilities the NSA would be willing to pay more than $10k for if it would allow them long term access to a lot of sensitive Facebook data.
If I find a wallet on the ground, I try to return it to the owner even if there's no reward and even if I'm not legally obligated to do so. I'd want someone who finds my wallet to do the same.
I don't think it's that simple of a decision, or one that could be generalised to all cases. What if you found an exploit that let you break a widespread form of DRM, access normally paywalled information, or gain control of a device that you rightfully own but the manufacturer locked down against you? It really depends on your moral/philosophical stance, but all the money in the world wouldn't make me report any of those. That's why the idea of someone so young already thinking of becoming a "security researcher" bugs (no pun intended) me, because they all seem to inevitably end up proponents of "security that oppresses" in some form or another.
"Leaking documents, expropriating money from banks, and working to secure the computers of ordinary people is ethical hacking. However, most people that call themselves 'ethical hackers' just work to secure those who pay their high consulting fees, who are often those most deserving to be hacked."
That, and it's great that Facebook offers an incentive to bug hunting, rather than treating the researchers like criminals, as some companies are known to do. I don't care for a lot of what Facebook does and how they do it, but this in particular is awesome of them.
I believe that particular opinion has been discussed to death here before, so I won't address it.
The point I was trying to make is the market value seems wildly different from what Facebook pays out. The pricing gives me the impression that Facebook's bug bounty program is more concerned with public relations than it is about improving Facebook security.
Do Facebook face some sort of liability under COPPA for allowing [condoning?] this under-13-year-old - I'm presuming without verifiable parental consent prior to use - to use their services?
Perhaps the time for Facebook to fight COPPA (for better or worse) is coming soon?
Anticipating this objection, I looked at some COPPA info briefly (I'm in the UK and not that familiar with USCs) and it suggested that jurisdiction was based on the ___location of the controlling company or the servers (either being sufficient) and not the ___location of the children accessing the service. That makes sense, as otherwise companies could just use offshore servers and bypass the regulation.
Trying to remember that other incident - not with Facebook, maybe Microsoft - where it was a teenager and they wouldn't pay them because they weren't 18+.
You've forgotten the biggest of them all: Amazon [1].
[1]: I consider them the biggest of them all considering how much of the web is powered by AWS (just imagine if you found an exploit to give you full access to all of AWS), but that's just my opinion.
What about Cisco, Oracle, Juniper, FireEye, Palo Alto, etc.? There are more companies that don't than do; hopefully that changes in the near future, though.
"The problem lay in a private application programming interface (the slice of code allowing certain outside access) that wasn’t properly checking the person deleting the comment was the same one who posted it, the spokesperson added."
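In other words, a classic broken object-level authorization bug: the endpoint honored any comment ID the client supplied without checking ownership. A minimal sketch of the flaw and the fix (handler names and data layout are illustrative, not Instagram's actual API):

```python
# Toy comment store keyed by comment ID. In the real API this would be a
# database behind an HTTP endpoint; the structure here is an assumption.
COMMENTS = {
    "c1": {"author": "alice", "text": "nice photo"},
    "c2": {"author": "bob", "text": "cool"},
}

def delete_comment_vulnerable(comment_id, requester):
    """The bug: trusts the client-supplied comment_id and never checks
    that the requester owns the comment."""
    return COMMENTS.pop(comment_id, None) is not None

def delete_comment_fixed(comment_id, requester):
    """The fix: refuse unless the requester is the comment's author."""
    comment = COMMENTS.get(comment_id)
    if comment is None or comment["author"] != requester:
        return False  # not found, or requester doesn't own it
    del COMMENTS[comment_id]
    return True
```

With the vulnerable handler, any authenticated user can delete any comment just by guessing or enumerating IDs, which is exactly the behavior described in the quote.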
Jesus, when I was 10 years old I was barely programming in ActionScript, which is now dead. Now at 28 I can't even make that kind of money in an entire year.
Paid out to over 800 researchers? Wow, that's a lot of security bugs. I wouldn't have guessed so many were possible. Imagine if they didn't have such a program!
Because it's an extraordinarily silly comment. If Facebook hadn't paid $10,000 for this bug, the next bidder in line would have been unlikely to pay more than $50.