
I've been using them in SF and it's a nice perk not to have to worry about filling up. The bigger perk is having someone more diligent than I am checking all the stuff I let go too long: wipers, tire treads, headlights, etc.


aaaaaaand it's not on the App Store


The blog post mentions that it's going to be rolled out slowly over the next few days.


But there's a link, and clicking it doesn't work.


My guess is sellers absorb this as a cost of doing business. As long as fraud rates remain in the low single digits, it's probably not worth it for the seller or Amazon to fight.

It is always fun to dissect these scams. There's an interesting e-commerce scam prevalent in India that popped up as the country embraced e-commerce without much credit card infrastructure in place (most online orders are paid to the delivery person in cash): https://simility.com/delivery-fraud

"The fraudster businesses ordered hundreds of products from the victim’s website to be delivered on a daily basis. Meanwhile, if customers came into their store asking for an out-of-stock product, they were told it would be in stock later that day. Then the fraudsters paid the delivery person in cash for the small fraction of products they had pre-sold to customers, while returning the vast majority of unsold products without paying for them at the cost of the e-commerce company, thus completing the delivery fraud cycle."


Indeed, fraud and shrinkage are costs of doing business for practically any enterprise. My worst experience was being a nice guy and accepting a personal check, which of course bounced, so I was out my cost-of-goods plus an additional fee from the bank.

My main protection is that my cost-of-goods is about 1/5 of my selling price. Also, it's a niche market product that would probably be hard to re-sell. Because of my healthy mark-up, I can afford to handle practically any dispute by offering a prompt refund.

It's tougher if you're not a retailer, but an individual selling a few items, in which case you don't have a mark-up or the law of averages to fall back on.


Great Snowden interview (by Neil deGrasse Tyson) where he explains why collecting "only metadata" is no excuse: https://soundcloud.com/startalk/a-conversation-with-edward-s...


Agreed, the best thing you can do is Google "_____ product review" and look for credible reviewers, someone with a reputation to protect. I worked with a product company where v1 had some flaws and started receiving negative reviews, but it was very easy to overwhelm those with fake positive reviews across the web.

Then when we came out with a much better v2, we got a scientist in our field with some following to do a short positive video review. That kind of review is difficult to get (I'm not talking about YouTube/Instagram/mommy blogger reviewers who give positive reviews to anyone who will send them free product), and there's no way he would have put his name on the line unless he had tried out the product extensively on his own. Any product company worth its salt should know this and get a few of these expert reviews posted to the web.


Product manager at fraud detection company Simility here. I'm very surprised Facebook hasn't put more effort into curbing fake accounts; it makes me think it's a very low priority for them. We have social network customers who are much smaller than FB, yet have gotten their fake account rates far below FB's.

One effective strategy we've employed that isn't mentioned here is category mapping: if an account of type A only targets accounts of type B for likes (especially if it ignores categories C, D, etc.), that's usually a strong indicator of fraud. For example, one very common strategy is to create a fake account for an attractive female to friend many male accounts (especially relatively new accounts unaware of these tactics). This can be easily detected by analyzing the gender and account age of all targets and computing a diversity score. Low diversity score = likely fraudster.
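As a toy illustration (the field names and the 90-day cutoff are made up for this example, not our actual model):

```python
from collections import Counter

def diversity_score(targets):
    """Score how varied an account's like/friend targets are.

    `targets` is a list of dicts with made-up fields
    {"gender": ..., "account_age_days": ...}; a real model
    would use many more categories than these two.
    """
    if not targets:
        return 0.0
    # Fraction of targets sharing the single most common gender.
    genders = Counter(t["gender"] for t in targets)
    gender_concentration = genders.most_common(1)[0][1] / len(targets)
    # Fraction of targets that are relatively new accounts (< 90 days old).
    new_fraction = sum(t["account_age_days"] < 90 for t in targets) / len(targets)
    # Heavy concentration on one gender and on new accounts => low diversity.
    return 1.0 - (gender_concentration + new_fraction) / 2.0

# An account that only friends brand-new male accounts scores ~0: likely fraud.
targets = [{"gender": "male", "account_age_days": 10}] * 50
print(diversity_score(targets))  # 0.0
```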


> We have social network customers who are much smaller than FB, yet have gotten their fake account rates far below FB's.

The incentive to make fake accounts on Facebook is orders of magnitude greater than on almost any other social network.

> One effective strategy we've employed that isn't mentioned here is category mapping: if an account of type A only targets accounts of type B for likes (especially if it ignores categories C, D, etc.), that's usually a strong indicator of fraud. For example, one very common strategy is to create a fake account for an attractive female to friend many male accounts (especially relatively new accounts unaware of these tactics). This can be easily detected by analyzing the gender and account age of all targets and computing a diversity score. Low diversity score = likely fraudster.

Facebook has methods that radically exceed this one in complexity, precision, and recall.


> The incentive to make fake accounts on Facebook is orders of magnitude greater than on almost any other social network.

Not true. In the article, the writer pays Russell $15 for 1,000 likes. Being generous and assuming each of Russell's fake accounts can farm out 100 fake likes, he's making $1.50 per fake account before it gets shut down. Compare that to social networks where you can directly extract payments from other members by listing fake items for sale, laundering payments from fake credit cards (on other fake profiles) to yourself, or link-baiting other users. A single successful fake account on those networks can easily net you $100.
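The back-of-the-envelope math on the Facebook side:

```python
price_for_1000_likes = 15.00     # from the article
likes_per_fake_account = 100     # the generous assumption above
revenue_per_account = price_for_1000_likes / 1000 * likes_per_fake_account
print(f"${revenue_per_account:.2f} per fake account")  # $1.50
```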

> Facebook has methods that radically exceed this one in complexity, precision, and recall.

Agreed, and indeed Simility's models use much more complex methods too, but a) I wanted to post an interesting example everyone here would understand, and b) I still say Facebook is not using anywhere near its full ability to stop these fake profiles, given how rampant this fraud scheme is on their platform. (Again, follow the money: FB has very little incentive to stop these fraudsters, who are only inflating its numbers. It's important to keep them in check, but there's no incentive to waste resources stopping them.)


> Compare that to social networks where you can directly extract payments from other members by listing fake items for sale, laundering payments from fake credit cards (on other fake profiles) to yourself, or link-baiting other users. A single successful fake account on those networks can easily net you $100.

You're comparing apples to oranges: fake accounts used for "like spam" etc. differ in complexity and scalability from accounts used for phishing. There are phishing accounts on Facebook as well.

> Again, follow the money: FB has very little incentive to stop these fraudsters, who are only inflating its numbers.

Facebook has a massive incentive to stop fake accounts: fake accounts decrease meaningful conversions, which lowers the ROI for advertisers, and that ROI is tracked carefully by both Facebook and advertisers. This directly lowers the price of ad space on Facebook and makes Facebook look noisier and less impactful than other channels.

Following the money leads down a direct, unmistakable path to a strong incentive to shut down fake accounts.

It's also very bad to accidentally shut down real accounts, especially in cases where users could be confused enough not to return.


> Following the money leads down a direct, unmistakable path to a strong incentive to shut down fake accounts.

I think the difference in opinion is that if you look at the long term, which you hope FB is doing, then yes, fake accounts that reduce ROI for advertisers are bad. Unfortunately, they lead to a short-term increase in FB ad revenue, which disincentivizes stopping fake accounts too effectively, since doing so may cause a noticeable dip in revenue, depending on the scope of the problem.

In a worst-case scenario, FB might be in a situation where 20% of ad revenue comes from bad impressions, and completely stopping that, if it had the power to, would have major negative repercussions for the company. There would need to be some hard choices made about the best path out of that situation. Not that I think this is necessarily the case, but it is an example of how the incentives may not be as clear as they seem.


> Facebook has a massive incentive to stop fake accounts

As an advertiser, I am pretty certain this is not quite the case in (current) reality. A large part of FB's pitch for more money from us includes:

A) Pay more for increased reach and engagement.

B) Our traffic isn't decreasing (despite outside reports/indications to the contrary) and you would be missing a massive and engaged audience if you didn't spend with FB.

This, combined with the fact that a _lot_ of ad spend isn't directly attributable to conversions (often by design), means that more "activity," whether it's fake or not, drives up ad revenue for FB.

You see the same issue with other publishers, by the way. It is not uncommon for a publisher (or other related party) to purchase a swarm of fake bot traffic to boost the impression and engagement numbers of an ad buy they've sold. Advertisers being unaware of how much of the traffic to their ads is bots vs. legitimate humans (read: "publishers stealing money from advertisers") is a major problem, but the bigger the publisher, the harder it is _not_ to be on their platform too. (And FB is _very_ big.)


> B) Our traffic isn't decreasing (despite outside reports/indications to the contrary)

It's not. Even the "leaks" make clear that traffic is still increasing, both overall and per person.

> This, combined with the fact that a _lot_ of ad spend isn't directly attributable to conversions

I've seen direct reports from advertisers at my last job (doing social media analytics) that show how well they can quantify ROI for ad spend. Fake accounts would negatively impact this number, and it would be extremely obvious immediately.


True. It is like saying Windows has far more viruses than Linux and macOS, when often that is down to incentives and market share rather than a lack of effort on Microsoft's part to curb them.

But I don't think FB has put in a lot of effort. I was being targeted by a fake account that was a Facebook profile of a company (created as a user). I complained and reported the user several times; Facebook has not taken any action. From what I can see, a simple regex on the name should tell that "Taylor Swift Lover Group Admin" is not a human being and can't have a Facebook account.
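Even something as crude as this sketch would catch it (the keyword list is just illustrative, obviously not what Facebook would actually use):

```python
import re

# Illustrative keywords that rarely appear in a real person's name.
ORG_NAME_RE = re.compile(
    r"\b(group|admin|official|fan ?club|store|shop|inc|ltd)\b",
    re.IGNORECASE,
)

def looks_like_org(profile_name: str) -> bool:
    """Flag profile names that read like a page or company, not a person."""
    return bool(ORG_NAME_RE.search(profile_name))

print(looks_like_org("Taylor Swift Lover Group Admin"))  # True
print(looks_like_org("Jane Doe"))                        # False
```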


For big companies that benefit from being able to say they have lots of users, there is a big incentive not to be good at finding fraudulent accounts. I worked in Big Data analytics at one of these $10B+ companies. We were separate from the fraud department. They'd filter out the fraud accounts, and we'd still have to re-filter because the behavior of the accounts they missed was so out of whack it would mess up our analytics. We tried to move our filters upstream and teach the fraud dept how to identify these accounts, but absolutely no one was interested. Also, it's common to have bonuses tied to user numbers.


Facebook generates money from ad views even when they come from fake accounts. There's very little incentive for Facebook to mass-remove fake accounts.


Hi. I worked on Facebook's anti-abuse infrastructure for a while (I'm still at Facebook, but working on different things now). While I didn't personally fight spam/fake accounts, I worked closely with those who did. I'll be blunt: based on this and your other comments, you don't know what you're talking about.

I'll go a step further and give you some unsolicited advice. The anti-abuse community among internet/game/tech companies is actually fairly close-knit, since it's one of the few places where everyone is on the same side and lots of "secrets" are shared (including at the spam-fighting conference we organized last year). I would bet a lot of people just rolled their eyes while learning of your company for the first time. You're already entangled in one argument with someone calling you on this silliness, but I assure you they're not alone. I'd suggest reconsidering this approach.


I'll be blunt, too. I'm interested to know why it's so easy for Facebook users like me to spot fake accounts and report them, while your crack team at Facebook constantly ignores them and lets them keep proliferating. I'm guessing you didn't get an inside look at the accounting that disincentivizes Facebook from removing these fake profiles. Or do you have a better explanation for why Facebook repeatedly ignores reports of obviously fake users?


Indeed, I know it's a close-knit community. Most of our 20-person team came from anti-fraud teams at Google. I'm guessing the "silliness" you're referring to is the talk of Facebook not being incentivized to block spammers. I think kbenson articulated best what I was trying to say: there are tradeoffs in fighting fraud, namely blocking good users and decreasing apparent user volume. Facebook would obviously not be wise to try to catch every single fraudster, because there would be a high number of false positives, so a balance must be struck. As I'm sure you know, fraud teams at many companies often clash with the marketing team, because the fraud team is protecting the bottom line (sometimes at the expense of the top line) and vice versa.


I worked at a company with a spam variable in the backend: 0 eliminated most spam engagement actions like likes; 1 let all the spam in.

We didn't set it to 0.

There's sometimes positive value in spam. E.g., Instagram users get a boost when their pictures are liked, whether by someone real or not.


Wow, what a statement from someone so close to it. Not sure what to make of it.

We recently bought some likes for a page via FB's internal system. The likes we eventually received were nearly 100% identical in terms of names and looks (mostly Middle Eastern or East Asian), even though the region we targeted was within central Europe, and there were lots of obvious fake accounts in there.


I used their Pixel + Create an Audience tool to target a Page Like campaign at people who had visited my business's website previously. Very low spam/fake-account % on that campaign.


I'm curious: do you think Facebook's anti-fraud measures are effective?


Didn't Friendster spend an inordinate amount of effort on detecting fraud accounts?


I know Facebook deleted a bunch of fake accounts about 9 months ago.


PM from a fraud detection company here. One thing I didn't see mentioned in this thread is device ID, which is very common on fraud detection platforms. When a user comes to your website or mobile app, you have access to hundreds of signals from their device. Some, like IP address, are easy to spoof. Others, like whether the user has changed their phone alarm from the default settings, are often ignored by fraudsters but are surprisingly telling signals (fraudsters don't bother to change default settings). We wrote an article on some interesting findings recently: https://simility.com/device-recon-results/. A good device ID product can not only tell if the same fraudster is accessing your app repeatedly while pretending to be different users, it can also flag risky user profiles the moment they land on your app, before they even make a payment.
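To make the idea concrete, here's a toy sketch of turning those signals into a score; the signal names and weights are invented for illustration, not our actual product:

```python
# Made-up device signals collected when the user lands on your app.
signals = {
    "alarm_is_default": True,      # fraudsters rarely bother to change it
    "timezone_matches_ip": False,  # a mismatch is a classic proxy tell
    "battery_always_full": True,   # emulators often report a fixed 100%
}

# Illustrative weights; a real system learns these from labeled fraud data.
weights = {
    "alarm_is_default": 0.2,
    "timezone_matches_ip": -0.3,   # a matching timezone lowers risk
    "battery_always_full": 0.4,
}

risk = sum(weights[name] for name, present in signals.items() if present)
print(f"device risk score: {risk:.2f}")  # score it before the first payment
```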


Just a thought: have you ever considered that by publishing such red flags for fraud, fraudsters will adopt these "organic" behaviors in order to appear more legitimate? I understand that the idea is to make illicit transactions more difficult, and that adopting these "organic" behaviors is harder, but automated fraud tools (i.e., what most script kiddies use) also become more sophisticated over time. Regardless, I bet you don't publish _all_ your fraud detection vectors for that exact reason.


I'd be surprised if all of the published vectors are genuine, too, for the same reason :)


> Device ID

It's incredibly easy to dupe and manipulate. If someone is determined enough, they can just edit the packet before it hits your server, or install another app/font/package/etc. to change the fingerprint. "Well, what about IMEI?" See the above point about intercepting packets.


You can use Valve's browser fingerprinting library. It's good enough to detect the basic guys who are jumping through proxies. Combine that with MaxMind's proxy detection service and it's a decent starting point.
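Server-side, the combination can be as simple as this sketch. It uses MaxMind's geoip2 Python package with their (paid) Anonymous IP database; the fingerprint hash is whatever the browser library computed and posted back to you, and the 3-account threshold is arbitrary:

```python
import geoip2.database
from geoip2.errors import AddressNotFoundError

# MaxMind's Anonymous IP database flags proxies, VPNs, and Tor exit nodes.
reader = geoip2.database.Reader("GeoIP2-Anonymous-IP.mmdb")

# Fingerprint hash -> account IDs seen with it.
seen_fingerprints: dict[str, set[str]] = {}

def is_suspicious(ip: str, fingerprint: str, account_id: str) -> bool:
    try:
        resp = reader.anonymous_ip(ip)
        behind_proxy = (resp.is_public_proxy or resp.is_anonymous_vpn
                        or resp.is_tor_exit_node)
    except AddressNotFoundError:
        behind_proxy = False  # IP not in the database -> not a known proxy

    # The same browser showing up across many "different" users is a red flag.
    accounts = seen_fingerprints.setdefault(fingerprint, set())
    accounts.add(account_id)
    return behind_proxy or len(accounts) > 3
```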


Interesting... If you have a device ID running on your site, how do you tie a 'suspicious user' it flags to the orders made by that user? I read a bit about your product and it's not clear how a web shop like Candy Japan would integrate with this quick and dirty.


Normally, an order on your back end is linked to our device ID via a session ID. However, our device ID can also capture user-entered data from fields on your website/mobile app. So if your customers enter their email address during checkout, that email will be tied to the device ID, and you can then look up suspicious orders by email address.
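For a quick-and-dirty integration, the lookup on the shop's side could be as simple as this sketch (the record shapes are made up for illustration):

```python
# Made-up records: the fraud platform returns device verdicts keyed by
# session ID (plus any email typed into checkout), and the shop's own
# order log carries the same session ID and email.
device_verdicts = [
    {"session_id": "s-123", "email": "a@example.com", "risky": True},
    {"session_id": "s-456", "email": "b@example.com", "risky": False},
]
orders = [
    {"order_id": 1, "session_id": "s-123", "email": "a@example.com"},
    {"order_id": 2, "session_id": "s-456", "email": "b@example.com"},
]

risky_sessions = {v["session_id"] for v in device_verdicts if v["risky"]}
risky_emails = {v["email"] for v in device_verdicts if v["risky"]}

# Flag orders by session ID, falling back to the checkout email.
flagged = [o for o in orders
           if o["session_id"] in risky_sessions or o["email"] in risky_emails]
print(flagged)  # only order 1
```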


Google SketchUp is an unsophisticated but good quick-and-dirty modeling app. It wouldn't be a huge leap to see it built on a CSS framework like this.


SketchUp[1] is built with ease of use in mind, for people who are not 3D pros, but there's actually quite a lot of clever stuff going on under the hood[2], so it would not be a weekend project to reimplement. But yes, a web-based clone would be awesome.

[1] It's now owned/developed by Trimble; Google sold it to them ~4 years ago

[2] http://mastersketchup.com/sketchup-inference/


The professionalization of the peer-to-peer space is getting incredibly interesting. Hotels must be realizing that hosts are starting to offer places that are essentially like hotels... only with much more diversity, many more locations, and more unique features. Tools like this are definitely driving some interesting changes in the space.


Yeah, very cool to see tools like this coming to peer-to-peer hosts.

And, I remember a few years back, when peer-to-peer accommodations were considered a novelty.

Now, it's pretty clear that I will probably spend most of my future trips renting someone's home, as opposed to a hotel. Hotels must be shaking...


> Hotels must be realizing that hosts are starting to offer places that are essentially like hotels...

Hotels realized that very early on in AirBnB's operations, which is why they complained about the conflict between what AirBnB and its users do and the laws governing hotel-like operations.


Interesting idea; this is the first virtual hackathon I've ever seen. I wonder if this is the first time someone's tried this. Could be huge.


Global Game Jam [1] and Ludum Dare [2] are two other 'virtual' hackathons.

[1] http://globalgamejam.org/

[2] http://en.wikipedia.org/wiki/Ludum_Dare


The VR Jam the Oculus folks put on also did this.

