After NSA's XKeyscore, Wikipedia Switches to HTTPS (fastcompany.com)
183 points by alecco on Aug 2, 2013 | 114 comments



https://www.eff.org/https-everywhere

Switches to HTTPS automatically when visiting Wikipedia, even if you're not logged in.


They should switch to a better cipher suite to enable PFS[1] (e.g. ECDHE-RSA-AES128-GCM-SHA256). Right now they're using RC4-SHA:

    $ echo | openssl s_client -debug  -connect en.wikipedia.org:443 | grep "Cipher is"  -A 4
    depth=1 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert High Assurance CA-3
    verify error:num=20:unable to get local issuer certificate
    verify return:0
    DONE
    New, TLSv1/SSLv3, Cipher is RC4-SHA
    Server public key is 2048 bit
    Secure Renegotiation IS supported
    Compression: NONE
    Expansion: NONE
[1]: http://en.wikipedia.org/wiki/Perfect_forward_secrecy
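
For reference, here's a quick way to check which cipher suite a server negotiates, sketched in Python (standard library only; assumes a reasonably recent Python where ssl.create_default_context exists):

    import socket, ssl

    # Connect and report the negotiated cipher suite. If the server prefers
    # a forward-secret suite, the name will start with ECDHE or DHE.
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection(("en.wikipedia.org", 443)),
                         server_hostname="en.wikipedia.org") as s:
        print(s.cipher())  # e.g. ('ECDHE-RSA-AES128-GCM-SHA256', 'TLSv1.2', 128)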


There are easier ways to reconstruct your HTTPS Wikipedia browsing habits than to crack HTTPS.

Because Wikipedia's content is public, the NSA can crawl the site repeatedly with all common user agents, recording the number of HTTPS bytes needed to download any given Wikipedia page. Then, simply by looking at the patterns of bytes sent over the wire, they can trivially reconstruct the likely pages a user was viewing.

Wikipedia has not discussed any plans to mitigate traffic analysis; until they do so, this whole exercise is futile, and I doubt Wikipedia will be able to obfuscate their site sufficiently to evade sophisticated traffic analysis.
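
To sketch the idea (all names hypothetical): crawl once to build a table of per-article transfer sizes, then match observed sizes against it:

    # Hypothetical sketch of size fingerprinting against a public site.
    def build_size_table(crawl_results):
        # crawl_results: iterable of (article_title, https_bytes) pairs
        table = {}
        for title, size in crawl_results:
            table.setdefault(size, []).append(title)
        return table

    def candidate_pages(table, observed_size, tolerance=32):
        # TLS framing and padding blur exact sizes, so match within a window.
        return [title for size, titles in table.items()
                if abs(size - observed_size) <= tolerance
                for title in titles]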


Presumably you could make it a single-page site where the page and server act like numbers stations, so that the page always uses a fixed bandwidth on each tick, only some of which is real data.


Or you could just insert a random payload into the served content. I imagine that you would only need to add a small amount of variation to completely thwart the type of analysis that codex described.
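
A minimal sketch of that mitigation, assuming a hook where the server can rewrite outgoing HTML (the function name is made up):

    import secrets

    def pad_html(body: bytes, max_pad: int = 4096) -> bytes:
        # Append an HTML comment of random length so the on-wire response
        # size no longer maps cleanly to one specific article.
        n = secrets.randbelow(max_pad)
        return body + b"<!-- " + b"x" * n + b" -->"

Note the padding must be re-randomized on every response, or repeated visits would let an observer average it away.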


They clearly mention this goal in their blog post:

http://blog.wikimedia.org/2013/08/01/future-https-wikimedia-...


Wikimedia seems to make a habit out of "considering" things for a long time before actually enabling them. Enabling forward secrecy is relatively straightforward.


At the scale of Wikimedia nothing is straightforward. Not even relatively.


This. We're talking about the fifth most popular site on the Web, operating on a budget smaller than my department at work.


Yes. However, if you do that, perhaps you're less motivated to find solutions to other, equally important problems with HTTPS.

  Enabling perfect forward secrecy is only useful if we also
  eliminate the threat of traffic analysis of HTTPS, which
  can be used to detect a user’s browsing activity, even
  when using HTTPS.


Traffic analysis is certainly an orthogonal problem to decryption of the ciphertext here. There's no reason to wait on one for the other.


Here's a sample nginx config that would prefer PFS over other key exchanges if the client supports it and is not vulnerable to the BEAST attack:

  ssl_prefer_server_ciphers on;
  ssl_ciphers EECDH+AES:EDH+AES:-SHA1:EECDH+RC4:EDH+RC4:RC4-SHA:EECDH+AES256:EDH+AES256:AES256-SHA:!aNULL:!eNULL:!EXP:!LOW:!MD5;
Source: http://stackoverflow.com/questions/17308690/how-do-i-enable-...


Great, now let's also talk about auto-rotating keys.

I think we should assume the NSA is tapping fiber. My fear is they would sniff that traffic for keys.

If even one person fails to secure keys, we all lose. That's why I think rotating keys are important.


Things like this are going to force a confrontation at some point. Either the existing programs for monitoring people will become progressively less useful as people switch to HTTPS, or the government will insist very forcefully on access: demanding private keys from certificate authorities, for example.


Well, we effectively 'won' the last major confrontation (over crypto code), so a confrontation is not bad per se.

However this does mean that systems like PRISM and phone metadata will simply become more important as 'upstream wiretaps' go away, and the NSA will surely have other tricks up their sleeves as well.

Of course, anything we can do to make their surveillance efforts require manual intervention (e.g. having to attack an ownCloud installation from within a rented system in the same datacenter) makes those efforts less of a threat to each of us than a completely automated tracking of anyone they wish.


And more worrying than even that: with the top four browser vendors all US-based, would pressure be put on them not to remove compromised root certificates? That, IMO, is more worrying than government interference in CAs. The system is designed to work around government interference in CAs (by removing the CA's root as trusted), but it isn't well equipped to deal with government interference with the vendors who ship the trusted roots.


Because Firefox and Chromium are both open source, this won't be an issue. If the browser companies remove the ability to remove root certs, we can just fork them and add it back.


This isn't a technological issue, it's a political issue. It requires a political solution, since laws can be enacted to make what you're proposing illegal.

The hacker mantra is indeed "There is a key to every lock" but what happens when 1) you unlock a door, 2) they know you unlocked that door, and 3) it's illegal to unlock that door?

Answer: Then they put you in prison.


I'd love to see that headline: Hacker Jailed for Not Updating Web Browser


It would read like this:

A hacker charged with changing YOUR Internet Browser, potentially making the entire country less secure in the face of terrorists, has been found guilty of crimes against the state.


Exactly. Let's not pretend they aren't experts at framing false narratives.

Actually, that's pretty much the #1 prerequisite for the job of a being a politician...


How about Hacker Jailed For Shipping Web Browser With Secure Encryption? That sounds crazy, no? Yet the US has limited the availability of encryption before, so why should it not do so again?


What law would be so dumb as to force an American browser to include malicious CAs, publicly, and not also force everyone to use only those American browsers?

In your imagined dystopia what you're talking about doing (using a browser with safe CAs) would be illegal no matter where the browser really comes from.


It's no different from locked-down phones/tablets where it's illegal to root them. You don't need to force everyone, just people using American ISPs/carriers.


That's not the issue. The issue is the majority of browser users will use the default set of root certificates. Forking and removing them is the least of the difficulties; getting people to use the fork is the problem.


but not my problem...


Oh, sure, some may know enough and care enough to do something about it, change browser, change root certs — whatever is needed. But this isn't about them, this is about society at large. This is about whether your mother, your father can use their webmail account without being spied on: would you want all your emails to and from them available because of a MITM attack on someone who is not you? (I realise email is not a great example, with emails typically being transmitted in plaintext between SMTP servers, but is reasonable in the generic digital communication sense.)


If you live in a police state where everyone you know is being spied on by the government, that sounds kinda sucky.


Relevant infrastructure seems less likely to be developed and maintained if an insignificant number of people use it.


This is why HTTPS certificate pinning exists. Having a standardized way of requesting a pin would be a very good thing right now. (See: HPKP and TACK)
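
For illustration, an HPKP-style pin is just a base64 SHA-256 hash of the certificate's SubjectPublicKeyInfo. A sketch of computing one, using the third-party cryptography package:

    import base64, hashlib, socket, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def spki_pin(host):
        # Fetch the leaf certificate and hash its SubjectPublicKeyInfo,
        # which is what a pin-sha256 value commits to.
        ctx = ssl.create_default_context()
        with ctx.wrap_socket(socket.create_connection((host, 443)),
                             server_hostname=host) as s:
            der = s.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()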


Isn't this how the march of technology always works?

Governments knew how to tap analog phone lines well, and when cell phones became popular they had to adapt methods.

Governments knew how to capture or analyze mail traffic, and when the Internet became popular they had to adapt methods.


Seems like the NSA could still easily fingerprint some users:

1) Find the first edit made by a user

2) Search for IP addresses with a spike in outbound traffic to Wikipedia at around the same time (editing a large section submits a large POST).

3) Follow the user's further edits, and do the same as above to keep narrowing the candidate IPs down.
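
A rough sketch of that correlation, with hypothetical flow-record fields (edit timestamps are public in the page history):

    from datetime import timedelta

    def candidate_ips(flows, edit_time, min_bytes=5000,
                      window=timedelta(seconds=30)):
        # flows: iterable of dicts like {"time": datetime, "src": ip_str,
        # "dst": hostname, "bytes_out": int} -- hypothetical captures.
        return {f["src"] for f in flows
                if f["dst"] == "en.wikipedia.org"
                and abs(f["time"] - edit_time) <= window
                and f["bytes_out"] >= min_bytes}

    # Intersecting candidate sets across several edits narrows the pool fast:
    # suspects = set.intersection(*(candidate_ips(flows, t) for t in edit_times))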


I don't think they're so worried about edits. Wikipedia is a go-to source for quick lookups of information.

I work in a nuclear physics lab; if the NSA is watching, I'm sure that some of my searches have triggered flags. Doesn't help that the physics community is small, and everyone is only a few degrees of separation apart.


The NSA could probably identify anyone on the internet given enough time and resources. I think the point of this is to make mass surveillance difficult, so they'll only target people flagged as suspicious instead of everyone.


IIRC, I read elsewhere that they can only store data for the last 24 hours, simply because so much data comes through the pipes.

So, unless they already had decided to keep an eye on you, your traffic would probably have gone unnoticed. Switching to a VPN now or something would keep you relatively anonymous.


On the other hand, switching to a VPN would make them flag your data (encrypted) and retain it longer (though you'd probably still be fine).


> IIRC, I read elsewhere that they can only store data for the last 24 hours.

The XKeyscore slides are 5 years old. Who knows what they can do now?


I noticed that if DuckDuckGo returns a link to a Wikipedia article, it always seems to be an https URL. With Google, Wikipedia links seem to vary between http and https.


DDG defaults to HTTPS links wherever possible; it's like having HTTPS Everywhere in your search engine.


That's one of the important virtues of DDG for me; I think knowing this will help me switch.


Not yet for en.m.wikipedia.org.


I sent them a proposal to change that. :-)


+1 to DDG for this.


Is TLS really secure from the NSA?


Against mass surveillance, yes.

But if you become a person of interest, there is a strong possibility that they can do a man-in-the-middle attack very easily (with certificates that don't raise any alarms). They probably have equipment in major network hubs that can divert traffic through their servers.

Remember how easily Nokia, Opera, and Amazon were able to do MITM attacks against phone users by running traffic through an SSL proxy (I think Nokia has stopped doing this). https://www.schneier.com/blog/archives/2013/01/man-in-the-mi...


But that's when using the Nokia, Opera, or Amazon browser. If you're worried about Nokia, Opera, and Amazon facilitating MITM attacks, they could also just program the browser with a secret NSA certificate authority.


>they could also just program the browser with a secret NSA certificate authority.

I strongly suspect the NSA doesn't have to do that kind of thing, which could easily be noticed. They just ask Symantec/VeriSign nicely to give them a valid certificate. Or they already have the keys to common root certificates.


And if a MITM attack isn't sufficient, they have the tools to simply penetrate your machine and/or the server.

I agree with your main point.


I'm not a security guy, but it seems to me that it would also be useful to mask the URL. It's my understanding that a snooper could still see that you accessed https://en.wikipedia.org/wiki/Tiananmen_Square_protests_of_1... , but not the content of the page.

Maybe offer a search on the site that returns links that are generated just for you, so instead of going to the above url, you'd access something like https://en.wikipedia.org/wiki/onetime/45sdf3sd8re2dfa7w7eras... (and throw away the key after the access).


At a really rough approximation, this is how an SSL pageview works:

1. DNS query in plaintext for en.wikipedia.org.

2. Open a connection to the resultant IP (the fact that you connect to this IP is traceable).

3. Do an SSL handshake.

4. HTTP protocol stuff, including transmission of PATH_INFO, happens on the encrypted channel.

5. Server responds on encrypted channel.


Also worth pointing out that if you have a local DNS cache (you almost certainly do), and if there are several hosts sharing an IP, then given a cache hit, the adversary will only know the connection is to one of a set of hostnames (those you have previously requested and for which the cache is still valid) or to the IP itself.


Not with a modern browser with SNI. This transmits the host name in the initial unencrypted portion of the SSL connection.
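
You can convince yourself of this locally (a throwaway sketch; the port is arbitrary):

    import socket

    # Accept one TLS ClientHello and show the SNI hostname is readable
    # before any encryption starts. In another shell, run:
    #   openssl s_client -connect 127.0.0.1:8443 -servername en.wikipedia.org
    srv = socket.socket()
    srv.bind(("127.0.0.1", 8443))
    srv.listen(1)
    conn, _ = srv.accept()
    hello = conn.recv(4096)
    print(b"en.wikipedia.org" in hello)  # True: the name is in the clear
    conn.close(); srv.close()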


Gotcha, thanks for the explanation.


Actually, there is quite a bit one can derive just by timing the requests and looking at payload sizes. Given that the size of Wikipedia articles at any given time can be calculated, as well as those of the articles they link to, it's quite possible one could reconstruct a given Wikipedia browsing session using metadata alone.


In turn, Wikipedia could inject random packets of data into the payload to obfuscate the trail.


A snooper could only see that you accessed en.wikipedia.org.


I've been using HTTPS Wikipedia for years. It's great to see them make it the default.

But considering what Wikipedia is, users wanting increased privacy could just download a copy of the encyclopedia and do their searches offline. Wikimedia makes data dumps of their user-generated content (UGC) available to the public. (Don't you wish all mega-websites relying on UGC did that?)

There was a time before the internet when we used volumes of paper-bound encyclopedias. These were not written by laypeople and they were not free. Few people owned their own set of volumes of Britannica's encyclopedia. They used someone else's copy, e.g., a library's.

But imagine if Britannica offered _free_ copies of their encyclopedia that could somehow fit in your pocket (as is possible now through digitization and Wikipedia).

Would you continue to use a copy belonging to someone else every time you had to look something up? Why wouldn't you obtain a copy for yourself?

What if... Wikipedia's data dumps were small enough. Wikipedia content was, overall, static enough. Storage was cheap enough. Download speeds were fast enough. And you could get your very own copy of the encyclopedia.

Compared to the speed, reliability and privacy of offline reading, grabbing specific articles piecemeal via HTTPS simply cannot compare.

See Openmoko's WikiReader as an example implementation. It's on GitHub.


The only possible disadvantage is information about rapidly changing world events (for which Wikipedia isn't the best resource, but still). English Wikipedia dumps are only run on a monthly basis.

Images are much more resource-intensive, but if text alone is sufficient, the average user can download the compressed Wikipedia dumps in less than 2 days.

Please use torrents to reduce the load on Wikimedia's servers and to increase download speed: https://meta.wikimedia.org/wiki/Data_dump_torrents#enwiki

I keep a backup on an external hard-drive, in case of apocalypse or censorship (same thing). Nobody can take away my list of Scrubs episodes.
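
As a sketch of how usable the text dumps are offline (the filename follows the usual dump naming convention; treat it as an example), you can stream article titles straight out of the compressed file:

    import bz2
    import xml.etree.ElementTree as ET

    # Iterate a pages-articles dump without unpacking it to disk;
    # elem.clear() keeps memory bounded on a multi-GB stream.
    with bz2.open("enwiki-latest-pages-articles.xml.bz2", "rb") as f:
        for _, elem in ET.iterparse(f):
            if elem.tag.endswith("}title"):
                print(elem.text)
            elem.clear()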


I bought a $70 tablet just as my hitchhiker's guide. Put Wikipedia in en, fr, and es on an SD card, plus various other smaller wikis. I recommend http://aarddict.org/ as a FOSS Win/Linux/Android/etc. reader.

The only problem is that images need massive space. I hope some technological advances will enable us to include all of them in wiki dumps soon.



How helpful would this actually be? If some semi-omnipotent entity were to observe the https traffic, could deductions be made about the series of web pages visited/information sought by comparing the sizes of traffic to the known sizes of wikipedia pages?


Could Wikipedia serve pages with a random number of dummy bytes inserted (like within a comment) to prevent this?


Well, considering that the vast majority of its users have already signed up, and that information is already stored, their accounts can already be researched. Same goes for most sites which implement a "new" security scheme.


> Well, considering that the vast majority of its users have already signed up

Do you have any data for that? I highly doubt it.


Let's see, the data is that all the users that have signed up before they changed their security, have signed up before they changed their security. Since they changed it very recently, but have been around for years, I think it's a safe bet that my statement is correct.


I highly doubt the number of users with accounts constitutes a majority of Wikipedia users.


Fine, what I really meant to say was "accounts", not "users". In my framework, I only consider "users" those who signed up, so I usually refer to accounts as "users", but technically this may be incorrect.


I wouldn't be surprised if 99% of the people who have ever used WP never created an account.


There are 20 million accounts, 125 thousand active accounts [1]. The number of people who refer to Wikipedia is probably over a billion, since it's often the first hit in Google (speculation). Roughly 25% of edits are anonymous (no account), and anonymous edits are significantly longer than non-anonymous ones [2].

[1] https://en.wikipedia.org/wiki/Wikipedia:Statistics

[2] https://meta.wikimedia.org/wiki/Research:Anonymous_edits



Just in time for the BREACH exploit. http://arstechnica.com/security/2013/08/gone-in-30-seconds-n...

Time to get the TLS hangups sorted and upgrade to 1.2?!
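
The intuition behind BREACH fits in a few lines: compression output length leaks whether attacker-controlled input matches a secret elsewhere in the same response (illustrative only):

    import zlib

    secret = b"csrf_token=abc123"
    page = b"<html>" + secret + b"</html>"
    for guess in (b"csrf_token=a", b"csrf_token=x"):
        # The guess sharing a prefix with the secret typically compresses
        # shorter, revealing one byte of the secret at a time.
        print(guess, len(zlib.compress(page + guess)))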


> secure HTTPS

sigh


How many times have you used the "ATM machine"? ;)


Hey guys, I can't remember my PIN number.



*Will be switching to HTTPS...

I just went to Wikipedia and the implementation is not live yet.


You can just go to https://en.wikipedia.org and it works. We just don't have automatic redirection yet.


You could always do that.


Can they afford to? What was stopping them before?


It's not exactly a basement operation. I mean, Wikimedia is basically one of the busiest destinations on the web, running on donations alone, so even if the majority of sites aren't affected that much by switching to HTTPS, at the volumes they're serving it must have a pretty significant impact.


I posted a patch to change Firefox's Wikipedia search bar from HTTP to HTTPS, but Wikipedia developers said their infrastructure was not ready for the load. Maybe they will be more receptive now. I'll ping them again. :)

  "We're still waiting on some core changes in MediaWiki to switch logged-in users
  to HTTPS. I'd prefer to have that happen first."
https://bugzilla.mozilla.org/show_bug.cgi?id=758857

https://bugzilla.wikimedia.org/show_bug.cgi?id=45765


Yeah, that's what I'm thinking as well; this isn't a flip-of-a-switch change. I hope this doesn't impact their operating cost too much. :(


They may very well see a significant spike though.

This is why people should donate as much as possible before the "drive banners" come up. It doesn't have to be a whole lot; if most of the people who use it gave even a small sum, considering the volume of visitors, they could still keep up with operating costs pretty well.


> What was stopping them before?

That's the question we should all be asking.

Mallory has always been out there. NSA is certainly a member of Mallory, but is not the only member.

So while it's nice that the NSA is bringing attention to the idea of securing your communications, it shouldn't have had to come to this. It's been recommended practice for years now to use TLS for everything unless there are good reasons not to, to the point that it's baked into SPDY.


The good news in all of this is that people are realizing that it's not that hard to be more secure with their data. The information is getting out there about how to be secure, at least.

Maybe not against the NSA, but against everyday threats, people are going to be better prepared.


It takes some getting used to, even the idea that something like reading or editing an encyclopaedia might be an activity we should care all that much about securing.

(This whether or not your goal with TLS for it is securing that site or increasing the overall volume of encrypted data)


Why would it take some time? It's the exact same logic as 'privacy is important even if I have done nothing wrong' that we keep pointing out in response to those who say that "you have nothing to fear unless you have something to hide".


Because it's not true that everything we do has to be private. It just doesn't mesh with how people think of privacy.


We tend to expect online privacy to match our physical understanding of it. If I went to the library and used the card catalog to locate articles about pressure cookers and backpacks in an encyclopedia, I wouldn't expect the librarians to tell the police, and I wouldn't expect the police to come and knock on my door to ask me about it.


Interestingly enough, that's an incorrect expectation[0], which makes it a great example of how poorly understood privacy rights actually are, both in physical space and the virtual space.

[0] http://www.ala.org/Template.cfm?Section=ifissues&Template=/C...


Our expectations of the physical world derive from our experience of the physical world; they're valid notions in that sense.

How would the FBI monitor me using a card catalog and getting a book off the shelf? With a camera? If it was a camera, it's still not what we expect from the physical world, because we don't physically expect that we are being watched unless there are actually eyes staring at us.

If there was an FBI agent watching me look up things from a card catalog, and another one watching me read the books, this would match my physical understanding of a lack of privacy.


I talk about this a bit in another comment; the analogy is breaking down because we're equating active surveillance (guys in suits watching you) with passive data collection (cameras in the library that catch you carrying around a book on pressure cookers).

In a public library, it is not a reasonable person's expectation that the books that person is selecting are not going to be seen by others in the library. A reasonable person would not expect privacy in a public library, because it is, by definition, public.

Facebook isn't public, though. Expectations of privacy change here, and that's where we get into murky waters - we have no meatspace equivalent to Facebook or Google, so we don't really know what to expect.


> A reasonable person would not expect privacy in a public library, because it is, by definition, public.

A library is public not because there is a lack of privacy, but because it is owned by the state. Do you not have privacy in a public restroom? The same arguments here about the physical expectations of privacy apply to private libraries in private universities.

Yes, when I go to the library, I notice from time to time the covers of books or magazines that other people are reading. However, in retrospect I cannot remember a single instance of who read what book, neither their names (if I know them) nor their faces. I'm a pretty observant person with a good memory, so I have the same reciprocal expectations of others. If I were to walk up to somebody and straight up ask them what book they were reading, or to start reading over their shoulder, that would be considered an invasion of privacy.

Of course I don't have any way to prove this, but I would be astonished if a stranger in the city that's only ever seen me in the library could tell you what I've taken off the shelf.

Facebook is kind of like posting things on bulletin boards. Email is like sending postcards. Google is the aforementioned card catalog.


That's a fair point about the public library, but it is still a public place.

The standard of expectation of privacy is set by society[0], not you specifically. Even if you don't write down the books other people are carrying around, it's not illegal or even morally wrong to do so, nor would it be an invasion of privacy to ask a person what book he/she was reading. Not sure about literally reading over their shoulder, but that's not what we're talking about.

There is zero legal precedent for this idea of "anonymous in a crowd" that I see every now and then when talking about privacy. I am legally allowed to take as many pictures of public places and people in public places as I want, so while a person might not be able to directly recall your face, I could easily and legally take your photograph in a public space and thus track your movements that way.

[0] http://en.wikipedia.org/wiki/Expectation_of_privacy#Overview


Oh, yeah, I was talking about subjective expectations of privacy. I didn't realize you were arguing a legal perspective. When I say expectations, I don't mean societal rights, I just mean what I expect to happen. I think they derive largely from how we experience the world physically, and I think that they translate poorly to online privacy. In particular, we usually consider conversations between two people to be private, but two computers talking to each other is quite often public.


Absolutely. It's one of the hardest parts of all of this - translating physical space expectations to the Internet.

Congress should draft up some kind of "rules of engagement" for the Internet, explicitly setting expectations. We know we're going to get searched at the airport, but we have no clue what the US government is going to do to us when we're online. That would be friggin' helpful to know.


Nitpicker point: it's been a while since I used a card catalog, but I used to a lot, and the user stands pretty close to the cards as they read them (especially for the lower drawers). A drone might have a chance, but not an FBI cam.


Everything doesn't need to be private, but we expect it to be, just like we expect letters sent in the mail to be.

It's not like Wikipedia is shifting to HTTPS because people reading about the "American Revolutionary War" are going to get flagged for rendition. That HTTP session didn't need to be private.

Wikipedia is shifting because they don't want the NSA snooping on any of their users' traffic. But if they didn't want the NSA snooping, then certainly they didn't want ISPs, hackers on the same cable modem loop, etc. snooping, so they could/should have made this switch a while ago.

Even should we change the law to match our expectations regarding Internet privacy, HTTPS is still a better idea, as the NSA is really the least of the worries for the vast majority of us, who face bigger threats from organized crime in Eastern Europe, spammers forming botnets, etc.


Watch for a drop in China traffic: HTTPS Wikipedia is blocked but HTTP is not.


Considering how much I use WP, they're gonna need a terabyte just for me.


Why just for logged-in users?


The Bitcoin concept disrupts the banking industry and effectively deprecates it. I'd like to see a similar decentralized system applied to SSL certificates, where anyone can establish a secure connection without paying fees to third-party CAs.


Don't be ridiculous. Bitcoin is little more than a reliable and decentralized transaction network, and a currency.

Banks are connections between customers that have deficits and customers that have surpluses, i.e., credit and lending. There are also investment structures that people use, i.e., stocks and bonds.

In addition to consumer banking and investment banking, there are also insurance functions.

There are companies that meet individual needs (ie: companies that do only insurance) but many consumers still prefer to utilize "one-stop shop" banking institutions that accommodate a variety of needs.


This is ridiculous. Wikipedia is headquartered in San Francisco. If the NSA wanted to snoop Wikipedia lookups, it would force Wikipedia to install PRISM-like access devices on the site itself, secretly. Switching to HTTPS consumes more resources all around, increases latency, increases site operation costs, and emits more climate-changing CO2, with no net change in the NSA's capabilities to snoop Wikipedia lookups.


It's not ridiculous. First, it requires the NSA to actually do what you propose, as opposed to just broadly reading all traffic passing through. That is a big difference, because it gives Wikipedia the possibility of fighting back. Jimmy Wales has suggested that if he's ever given a FISA gag order, he might disobey it: https://twitter.com/jimmy_wales/status/362596285469044737


He might disobey a personal gag order, but I can assure you that as long as PRISM exists, U.S. companies will comply if targeted.

There are also plenty of other attack vectors for the NSA even with HTTPS; obtaining the private keys of the common certification authorities is perhaps the most straightforward.


Why would HTTPS necessarily cause more CO2 emissions? Wikipedia might be headquartered in San Francisco, but their data centers are not. For example, their European data center is CO2-neutral. [1]

[1] http://www.thewhir.com/web-hosting-news/evoswitch-hosts-the-...


Their US datacenter is in Florida.

EDIT: My info is out of date. Their Florida datacenter has become the backup, with the primary being Equinix in Ashburn, Virginia:

http://www.datacenterknowledge.com/archives/2013/01/14/its-o...


More computational load = more CO2. Even if they offset CO2 emissions, that money could have been spent more efficiently by not running HTTPS.


The NSA isn't the only party interested in Wikipedia habits. Adversaries in other countries also like to listen.


"If the NSA wanted to snoop Wikipedia lookups it would force Wikipedia to install PRISM-like access devices to the site itself, secretly."

Wikimedia employee here. FYI, we're headquartered in SF, but we have no datacenter here. All traffic goes through our datacenters in FL, VA, and Amsterdam.


So all of your datacenters are in the U.S. and in Holland? State surveillance in the Netherlands is even worse than in the U.S. [1]

[1] http://www.bit-byters.net/2009/11/netherlands-still-1-in-pho...


It may not make much of a difference to you, but I should point out that our colo in the Netherlands is mostly used for a Squid cache cluster, and is not a primary datacenter.

Also, the citation you give is about phone tapping, not the kind of NSA-style online surveillance we're talking about trying to protect against with HTTPS support.

IMO, in general you shouldn't consider your activity over HTTP private regardless of whether you trust a particular government or not. Even some random individual running Firesheep in your local cafe is a threat in such a case.

About the larger point: even with the fully craptastic information we've learned about NSA snooping, there are huge advantages for Wikipedia users to have our organization and Wikipedia data hosted in the United States. As one big one: I'm not sure Wikipedia could survive if it wasn't for Section 230 of the Communications Decency Act.[1]

1. https://www.eff.org/deeplinks/2013/07/cda-230-success-cases-...


The symbolic effect is still valuable.



