
OA quote: "'lower league English football' wasn't available"

Possibly explains the recurring need for a fake one.

Serious contribution: bots faking it with other bots generating large numbers of apparently real social profiles. Not just gf/bf whatever but other forms of relationship (fake boss asking for more work this weekend as a cover for...).

Ideal chaff for the surveillance society.




I think if you're going to undertake large-scale chaffing of the internet, it'd be way more fun to target the future rather than the present.

Don't just create a hundred million fake profiles with their own social networks: try to create alternate timelines. Profiles on social networks, emails, instant messages, dating profiles, recontextualized and faked photographs, websites and DNS entries and forums, all supporting an alternate timeline, smushed as thoroughly as possible into the internet.

Don't just try to confuse current surveillance states, try to confuse future historians!


I believe (some?) reputation management companies already do similar things.


I ran a website like that; it was all scammers, on both sides. I left because I didn't want my name attached, but it organically attracted both scummy type 1 users and scummy type 2 users, where both were necessary for interaction to take place.

EDIT: I should add that I wasn't made fully aware of the depth of the scamminess of the whole thing until I accidentally tried cleaning up some "offers" and found it was all scummy offers to scummy counterparties.


How did the scams work? Why were the users scummy? Was it just faking social interactions?


The scams were hardcore; yes; and no. There were no fake social interactions; it was actually just scammers trying to get people to order goods on the pretense that they would then ship them to Buenos Aires, for example. That example, oh man, I couldn't believe how sketchy, how beyond sketchy, that website was. Or is.


It wouldn't be too hard for the surveillance society to distinguish the real profiles from the bots (Edit: or from people pretending to be different people).

The US Government has built up such far-reaching surveillance tools, of which we have seen just the tip of the iceberg, that the chaff you are talking about wouldn't be chaff at all.


These aren't bots. These are people, pretending to be other people.


Either way, it would be very hard to generate significant chaff that the surveillance guys would go after.


Maybe. But the thing is that they're going after everything. Add some hot keywords. Back in the day, we used sigs for our Usenet posts that were loaded with hot keywords.

Anyway, this would cost real money, for sure. But the labor supply for fake friends is huge, especially if you don't care about proficiency in English.

Edit: For example, one could generate numerous fake Russian acquaintances of Edward Snowden, and have each buy a few fake friends. Maybe the fake friends would be impressed ;)


When someone suggests rolling their own crypto, a bunch of people pop up to point out the considerable risks.

We need the same for these "wheat / chaff" steganographic noise generation schemes. Steganography is probably secure, but people tend to vastly underestimate the amount of cover material needed.
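
To put a rough lower bound on "cover material": a minimal capacity sketch, assuming classic 1-bit-per-byte LSB image embedding (the payload size is illustrative):

```python
# Rough steganographic capacity arithmetic (illustrative figures).
# Classic LSB embedding hides one bit per cover byte, so the cover
# must be at least 8x the payload, before you even account for the
# statistical detectability of embedding that densely.
payload_bytes = 1_000_000                  # 1 MB of secret traffic
bits_per_cover_byte = 1                    # 1-LSB embedding
cover_bytes = payload_bytes * 8 / bits_per_cover_byte
print(f"cover needed: {cover_bytes / 1e6:.0f} MB of raw image data, minimum")
```

And in practice, safe embedding rates are a small fraction of one bit per byte, which pushes the cover requirement up by another order of magnitude or more.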

Considerable money and time have been devoted to "finding the real signal".

The Usenet keyword triggers were trivially easy to filter. Mostly because people just put a list, in all caps, at the end of their post.

We know these are trivially easy to filter because putting a list like [ECHELON CIA NSA GCHQ IRA BOMB inurl:groups.google.com] returns loads. Here's one example, with the handy words "anti echelon block" https://groups.google.com/forum/#!msg/uk.media/KqhWP1rLF9U/G...
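
A filter along those lines is only a few lines of code. A minimal sketch (the keyword list and the threshold are illustrative):

```python
# Toy filter for keyword-stuffed sigs: flag posts whose last line is
# dominated by known "hot" keywords, which is exactly how those
# anti-Echelon sig blocks were laid out.
HOT_WORDS = {"ECHELON", "CIA", "NSA", "GCHQ", "IRA", "BOMB"}  # illustrative list

def is_keyword_chaff(post: str, threshold: float = 0.5) -> bool:
    lines = [l for l in post.strip().splitlines() if l.strip()]
    if not lines:
        return False
    words = lines[-1].upper().split()
    hits = sum(1 for w in words if w.strip(".,;:") in HOT_WORDS)
    return bool(words) and hits / len(words) >= threshold

# The classic all-caps sig is caught immediately:
print(is_keyword_chaff("Nice post.\n\nECHELON CIA NSA GCHQ IRA BOMB"))  # True
```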


Doesn't that mean that the real bad guy would just put that list in his messages?


I'm not suggesting that this would, by itself, hide anything. But if enough people did it, it could have an overall pro-privacy impact.


The whole thing reminds me of the main character's job in Her. Life imitates art.


Isn't spotting an AI algorithmically about as hard as building it?


Well, based on inspection of content, maybe. But we're not talking about mere words as an expression of thought.

The simple fact is that within U.S. borders alone, there are maybe 300 million animals, each weighing in at probably 150 lbs (75kg) or more. They drive cars visible from space. They flip light switches on and off wherever they go. They clock in at work. They clock out. They walk in front of cameras, and there are cameras dotting every place of business, and in homes and outdoors. They plug things in and then unplug them. They power on machinery. They use credit card terminals. They visit ATMs. They buy things in cash, with serial numbered currency, which passes through more optical scanners than you might guess. And almost everyone speaks English, exclusively.

And then there are the mobile devices, the uniquely identifiable radio beacons that almost all of us have volunteered to be tagged with.

So VPNs, and proxies, and Tor, for all that they do, are only going to go so far.


You're assuming competence. The surveillance state is built by the lowest bidder and is far, far less useful than generally assumed.

Just look into the false positives on the no-fly list. Sure, it's slightly better than random chance, but odds are that if you pick a name from that list, it belongs to a non-threat. (It's far worse than that: you could probably pick several names before finding a real issue.)


The no-fly list can have three nines of accuracy and still have more false positives than actual hits.


Depends on how you define accuracy. You're thinking in terms of the test for the no-fly list being X accurate, but that does not mean the no-fly list itself has X accuracy. List A contains every person on the planet, so it's comprehensive, but nobody would call it an accurate no-fly list. List B contains the one name of a terrorist who has made specific, credible, and recent threats; it is very accurate.


I think what's really being alluded to is that regardless of your accuracy (unless it's 0% or 100%), your false-positive count will always depend on the mix of inputs you screen.

Put another way, if you have no terrorists going through, then your false-positive matches will always be greater than or equal to your true-positive matches; in this case they are equal only when both are zero, since there would never be a real positive.

In a less extreme case, such as the reality we live in, we have many millions of non-terrorists and a few terrorists, and even with three nines of accuracy, it's likely that your false-positive matches far outweigh your real positive matches.
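
Rough numbers, just to make that concrete (the population and threat counts here are made up for scale):

```python
# Base-rate arithmetic: even "three nines" accuracy drowns in false
# positives when the condition being screened for is rare.
# All figures below are illustrative, not real statistics.
travelers = 800_000_000        # screenings per year (assumed)
terrorists = 100               # actual threats among them (assumed)
accuracy = 0.999               # three nines: 0.1% error rate each way

true_positives = terrorists * accuracy                        # ~100
false_positives = (travelers - terrorists) * (1 - accuracy)   # ~800,000

print(f"true positives:  {true_positives:,.0f}")
print(f"false positives: {false_positives:,.0f}")
# False positives outnumber real hits roughly 8,000 to 1.
```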


75 kg is 165 lbs. 150 lbs is 68.0 kg.
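
Quick check, for anyone who wants the exact figures (1 lb is defined as 0.45359237 kg):

```python
LB_PER_KG = 1 / 0.45359237
print(75 * LB_PER_KG)    # 165.35 -> 75 kg is ~165 lbs, not 150
print(150 / LB_PER_KG)   # 68.04  -> 150 lbs is ~68 kg, not 75
```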


No, it depends on how good the AI is. For example, a really bad AI is really easy to spot.


And the bad AIs are the easy ones to develop. So "yes"?


The ones I feel for are the fake children of these relationships when they come to an end.



I like that a lot :)

Even now, it wouldn't be hard to pipe one fake friend to another. Add some delay and rate limiting. Route it through your Tor relay, VPN service, or whatever.
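
As a toy sketch of what I mean (the canned replies, delays, and rate limit are all placeholder values; a real setup would pipe each side through an actual chatbot and route the traffic as above):

```python
import random
import time

# Toy "fake friend" pipe: two sides exchange messages with human-ish
# delays and a simple rate cap. Replies are canned stand-ins for
# whatever bot you'd actually wire in.
CANNED = ["how's work?", "busy weekend ahead", "did you see the match?", "same here"]
MAX_MSGS = 20  # crude rate limit per session

def reply(_incoming: str) -> str:
    return random.choice(CANNED)

def converse(turns: int = 5) -> None:
    msg, sent = "hey", 0
    for _ in range(turns):
        if sent >= MAX_MSGS:
            break                          # rate limit hit, go quiet
        time.sleep(random.uniform(1, 5))   # human-ish delay (demo scale)
        msg = reply(msg)
        sent += 1
        print(f"friend says: {msg}")

converse()
```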


Sign me up for some fake kids!



