
>... a few weeks earlier had canceled a security conference presentation on a low-cost way to deanonymize Tor users. The Tor officials went on to warn that an intelligence agency from a global adversary also might have been able to capitalize on the vulnerability.

This is kind of worrying. I hope the Tor Project has information on the attack and is looking into ways to mitigate it. But if it's due to the nature of the protocol, then maybe it's time to look for a successor (we aren't using WEP anymore, right?)

As to the CMU stuff... Tokyo University has this pledge to make sure basically no military research is done on campus, which I feel to be pretty laudable.

I wonder if there's a similarly worded pledge for this sort of thing. But at the same time, universities can do a lot of good security research that can, in the end, strengthen the systems we use.

The "$1 million to target these specific people" sounds dirty, but "$1 million to do research on the vulnerabilities of Tor"... well that sounds like research to me. Pretty tricky.




There is no WPA2 alternative; this is it, this is the bleeding edge of Internet privacy. And since privacy is seen as a public enemy, a publicly sponsored attack is underway to weaken it, to the point where you can't really trust Tor for the kind of world-changing, nation-state-adversary, Snowden or Wikileaks level missions.

People needing a high level of protection can and should use Tor in their workflow, but they should not expect a one-click solution. On the other hand, it's perfectly adequate for day-to-day use by privacy-minded individuals who are not targeted by active attacks.


> Tokyo University has this pledge to make sure basically no military research is done on campus, which I feel to be pretty laudable.

So, you move it off-campus. See e.g. the MIT Lincoln Lab, https://www.ll.mit.edu/


I'm at MIT proper and a good portion of our team's medical device work is DOD funded. While we are primarily designing devices to be used in civilian hospitals, our diagnostic devices could also potentially be used to optimize battlefield care for soldiers, which I personally think is great.

I think a wholesale ban on military research is pretty silly; the ethical implications of projects should be considered on a case by case basis by the university.


> the ethical implications of projects should be considered on a case by case basis by the university

How did that work out during Vietnam, when the DoD dangled bags of money in front of universities?


I can't speak to that as I wasn't alive then and have not researched the topic. Care to elaborate?


As someone who wasn't alive then either, it's a rather well documented period of history, albeit mostly in dead tree form. Karnow is probably the classic (http://www.amazon.com/gp/aw/d/0140265473/).

To use a more modern analogy that exists on the internet: the 2010 US military research/development/testing budget looks like it was around US$80B.

Or in other terms, roughly equal to the total of all research spending by all other branches of the US government. (http://www.aaas.org/sites/default/files/RDGDP_1.jpg)

Now you're a university professor / dean / president. Times are hard (they always are, you're in academia). There's a huge pie sitting right next to the one you've been fighting over, and all you have to do is work on certain technologies that may or may not have lethal consequences.

I wouldn't take the bet on many people saying "No thanks, I'll be happy giving up grant money for moral reasons."


MIT Lincoln Laboratory is an FFRDC affiliated with MIT just as SEI is an FFRDC (of which CERT is a particular division) affiliated with CMU.


I believe this particular attack[0] has been fixed.

[0] https://blog.torproject.org/blog/tor-security-advisory-relay...


> I hope the Tor Project has information on the attack and is looking into ways to mitigate it. But if it's due to the nature of the protocol, then maybe it's time to look for a successor (we aren't using WEP anymore, right?)

The attacks on Tor are largely in the form of:

A) Outright implementation flaws [e.g. software bugs]

B) Malicious actors deploying Tor nodes [e.g. "On July 4 2014 we found a group of relays that we assume were trying to deanonymize users. They appear to have been targeting people who operate or access Tor hidden services. The attack involved modifying Tor protocol headers to do traffic confirmation attacks." https://blog.torproject.org/blog/tor-security-advisory-relay... ]

> A traffic confirmation attack is possible when the attacker controls or observes the relays on both ends of a Tor circuit and then compares traffic timing, volume, or other characteristics to conclude that the two relays are indeed on the same circuit. If the first relay in the circuit (called the "entry guard") knows the IP address of the user, and the last relay in the circuit knows the resource or destination she is accessing, then together they can deanonymize her. You can read more about traffic confirmation attacks, including pointers to many research papers, at this blog post from 2009:
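
To make the "compares traffic timing, volume" part concrete, here's a rough toy sketch (my own illustration in Python, not anything from Tor's codebase or the actual attack tooling): bin the packet timestamps seen at the entry side and at the exit side into per-second counts and correlate the two series; a score near 1.0 suggests the two observations belong to the same circuit. All names, data and thresholds below are made up.

    # Toy illustration of a traffic confirmation (correlation) attack,
    # as I understand the quoted description -- NOT Tor code. Assumes an
    # attacker who sees packet timestamps at both the entry guard and the
    # exit side, bins them into 1-second counts, and checks how well the
    # two volume series line up.

    from math import sqrt

    def bin_counts(timestamps, window=1.0):
        """Bucket packet timestamps into fixed-size windows -> list of counts."""
        if not timestamps:
            return []
        start = min(timestamps)
        n_bins = int((max(timestamps) - start) // window) + 1
        counts = [0] * n_bins
        for t in timestamps:
            counts[int((t - start) // window)] += 1
        return counts

    def pearson(a, b):
        """Plain Pearson correlation of two count series (truncated to equal length)."""
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        var_a = sqrt(sum((x - ma) ** 2 for x in a))
        var_b = sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (var_a * var_b) if var_a and var_b else 0.0

    # Entry-side observation (client IP known) vs. exit-side observation
    # (destination known). Made-up timestamps: same pattern, slightly shifted.
    entry_side = [0.1, 0.2, 1.1, 1.2, 1.3, 3.5, 3.6, 5.0]
    exit_side  = [0.3, 0.4, 1.3, 1.4, 1.5, 3.7, 3.8, 5.2]

    score = pearson(bin_counts(entry_side), bin_counts(exit_side))
    print("correlation:", round(score, 3))  # near 1.0 -> likely the same circuit

Real attacks are obviously more sophisticated (clock skew, padding, partial observation, active traffic shaping), but that's the basic shape of it.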

Pretty much the only defense is to control which entry nodes you use yourself:

https://www.torproject.org/docs/faq.html.en#EntryGuards

> Restricting your entry nodes may also help against attackers who want to run a few Tor nodes and easily enumerate all of the Tor user IP addresses. (Even though they can't learn what destinations the users are talking to, they still might be able to do bad things with just a list of users.) However, that feature won't really become useful until we move to a "directory guard" design as well.
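
For reference, restricting your guards is just a couple of torrc lines, something like the sketch below (the fingerprints are placeholders, not real relays; check the current manual before relying on this):

    # Hypothetical torrc snippet for pinning entry guards.
    # The fingerprints below are placeholders, not real relays.
    # StrictNodes 1 makes Tor fail closed instead of falling back to
    # other guards if the listed ones are unreachable.
    EntryNodes $AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA,$BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    StrictNodes 1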

Traffic confirmation is an inherent problem with a low-latency anonymity network and is still very much an open research problem.

However, controlling your entry nodes has a different problem:

1) It pretty clearly links you to entering the Tor network via a consistent set of nodes.

2) Capturing these nodes at the datacenter via warrants/legal action has been done in the past, and anyone is going to be able to find these nodes since they are no longer randomly selected...

3) Once you are actively targeted you are just as vulnerable.





