Hacker News

I audit tools that I, my employers, or partners are considering, and if I find obvious flaws I file bugs with those products, and generally we either don't use them or limit their use.

I feel qualified only to certify that a tool is obviously insecure, but I don't think -anyone- will ever be qualified to solo certify something as totally secure (though sadly that is how clean audits are generally read). I have in the past hired 3-4 audit firms, and each would find flaws the others did not, and that mine did not, while failing to spot flaws that mine did. No one has the full picture.

The only relevant one not under embargo at the moment was my recent casual 2-3 hour audit of Lifesize, in which I right away found a number of alarming issues the company would not address. Some of these may have been found to be non-issues under further inspection, but there was more than enough to assert that security was not a major focus of the platform.

This cursory look was all it took for me to feel confident in ending any consideration of that product to protect the interests of the entity considering it.

I reported all of these to the company; after no response for over 90 days, I made my findings public with due warning.

They are available here: https://gist.github.com/lrvick/6e600d8484cfb415d1e2b06e8b345...

(reminder: this was only 2-3 hours of work, so by all means take it with a grain of salt)




I think you probably understand where I am coming from at this point. When I think about "audits" for secure messengers, I think about professional, specialized, contracted assessments with dedicated teams of experts (I'm a little biased, since that used to be my line of work). And my two objections to this discussion are:

1. I think LVH is right to point out that "audit" doesn't have much meaning in the document you've produced; it gloms together assessments of wildly differing depth and quality and condenses them all to a single pass/fail.

2. Secure messaging security is hard, much harder than secure transports (which you alluded to in the other subthread when you mentioned TLS). There are in fact not that many people in the world who can do a proper cryptographic assessment of a messenger at this point (not because it's prohibitively difficult; it should be well within the reach of everyone with a graduate degree in cryptography who enjoys coding --- rather, just because it's a specialized skill set that not many people have an opportunity to get good at). Like I said: I wouldn't say I'm qualified to perform such an assessment (LVH, different story). And all that "just" gets you the cryptography! If your messenger gets popular, the framework it's built on becomes one of the most important targets on the Internet!

So I get itchy when people say things that amount to "I've audited things, I have an authoritative opinion about this stuff". Probably not? Maybe, but, like, if we were going to place a bet...?

You can turn that logic around on me. But I'm not the one making an extraordinary claim. You are. So: what really bugs me is the idea that this is a problem ___domain you can reduce to a Wikipedia-style comparison chart. That kind of chart will get people hurt.


I don't know that I made any extraordinary claims.

You did ask me if I had audited things before, and I answered honestly. I frequently report vulnerabilities in a range of open and closed systems and read a lot about those that others find. I also don't consider myself an expert, and I generally distrust people who claim they are in this space because it is, as you say, really hard.

I initially started a spreadsheet to document the very high level objective traits of messengers I find useful and others found it interesting/useful and helped extend it.

It is far too much cognitive overhead to do a deep-dive evaluation of 75+ messengers, but what this list allows one to do is quickly eliminate services that lack features vital to their use cases, platform targets, or threat profile: for instance, the ability to self-host, or end-to-end encryption.

From there, once you have your short list, you can make more effective use of your time reading code, doing audits, and reviewing the audits of others.

If a list like this simply makes someone aware of new up and coming projects in the space, or old ones that have been quietly evolving in capability... then I feel justified in having shared it instead of having kept it as a private document of my personal notes.

I for one learned about a number of new projects when putting this sheet together, and about interesting new approaches to solving these problems.

The thing that is pretty subjective about the list is how I sorted items based on my own research and threat profile.

I won't sit here and say Matrix or Briar or any other item near the top is perfect or lacks any security flaws. I personally place my bets on Matrix at this point, based on my deep-dive evaluation of it and similarly featured alternatives, but that is subject to change! Others are free to share their view that a well-funded but closed source messenger like Whatsapp is the general best bet.

To sober up on this topic a bit: Matrix has had glaring security flaws in the past, but also so have the options you personally recommend like Signal and Whatsapp.

At the end of the day all one can do is collect data, look over all the options, deep dive into the relative merits and claims of each to the extent one can, look at the research provided by others, and make a judgement call.

This list should be considered a starting point for research and a way to discover lesser known projects as it was for me.

It should -not- be seen as an end-all "use this, it is most secure" recommendation by any means. Anyone who looks at any one high-level list of binary data points and makes an exclusive choice on that alone is doomed to shoot themselves in the foot, with or without my help.

Hopefully that addresses your concerns about my intent here.


> I don't know that I made any extraordinary claims.

You published a spreadsheet listing messaging apps 'ordered by security' in which Signal, Whatsapp and iMessage are shown as way less secure than IRC. It also still says, and you've repeatedly argued here, that things like whether, say, WhatsApp uses E2E encryption is essentially unknowable.


IRC is lacking in features, but set up correctly with modern OTR, I stand by it having easy-to-reason-about security advantages that Whatsapp and iMessage do not. (Usability is another story.)

Notably, if there was a blackbox audit of Whatsapp or iMessage 6 months ago, you have no path to easily check whether a blatant backdoor was introduced in the build you installed last night. You also can't know if there were obvious flaws in the code the whole time that would be very hard to spot in blackbox testing if you didn't know what to look for. Maybe the app build from last night leaks its key intentionally via very subtle steganography in metadata?

Compare that to a binary I installed via F-Droid, which I can confirm was built reproducibly from a given git head whose code review I can go and read.

To use a simple analogy: I can see the exact ingredients of what went into -my- meal instead of what went into the meal of the food inspector.

This allows much deeper release accountability, and it -is- a major security feature, worth flagging, that iMessage and Whatsapp lack.

Verifying security with source code is hard enough. Without source code it is substantially harder and I for one have no interest in using or recommending security tools that fail to be 100% transparent about how they work.

Without source code all we have are claims that can never be thoroughly verified.


You keep making this argument and ignoring experts who tell you it is bogus. The whole thread is there for everyone to read; you can pretend you haven't read the rebuttals, but you can't pretend for everyone else. The idea that "without source code we can't thoroughly verify things" is false, and at odds with basically all of modern software security.


Oh, I read them. I simply firmly disagree with them, and my personal experience of 15+ years finding and fixing security issues flies in the face of your statements. We clearly test software for bugs very differently.

I made specific arguments and use cases to justify my position and you have simply told me I am wrong without directly addressing them.

Once again, I find the term "expert" overrated. I for one admit I am not an expert on security, a field that is already hard enough on auditors like myself without withheld source code.

I have also worked with a half dozen or so security auditing firms all of which stated source code access would make their job much easier.

It didn't take hours of blackbox testing for me to find CVE-2018-9057. It took 20 minutes of reading code on GitHub, half asleep at 4am, because I was curious about an unrelated bug.

I remain convinced blackbox testing would very probably never have found that vuln, and even if it did, not in as short a time. I trust Terraform over closed alternatives because it was patched within a couple of hours of me mentioning it on IRC, by a peer who submitted the bug report and patch in one shot. I could verify the source code fix easily and compile the new binary myself to take advantage of the patch before Hashicorp even merged it.

I can also easily verify there are no regressions in future releases.

Tell me how you go about solving this, or other subtle cases like stego exfiltration, more easily -without- source code. Also how you or your team could have patched the issue yourself without source code.

If I really did solve this the hard way, then I will by all means move my security engineering career to focus more on blackbox testing, as you seem to be advocating.


It sounds like you're arguing that you'd require source code to see whether something was using math/rand vs. crypto/rand, something that is literally evident in the control flow graph of a program. I do not doubt that source code makes it easier to review code when you're half-asleep at 4 in the morning.

For your particular example: go download a copy of radare, pull up any Go build artifact, and see for yourself how hard it is to understand what's going on.

I don't know about your security engineering career, but if you intend to get serious about vulnerability research, yes, you should learn more about how researchers test shrink-wrap software. I spent years at a single stodgy midwest financial client doing IDA assessments to find vulnerabilities in everything from Windows management tools to storage appliances. It wasn't my IDA license; I was augmenting their in-house security team, which had 4 other licenses. This was in 2005.


The primary vulnerability here was that the author used the current time as the sole random seed for generating passwords. From there, the output of the PRNG -looks- random, but it is in fact fully determined by a seed an attacker can narrow down, and thus not really random.
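
To make that concrete, here is a minimal sketch of the pattern being described: a time-seeded math/rand password generator. This is my own illustrative code, not the actual Terraform source; the function name and alphabet are made up.

```go
package main

import (
	"fmt"
	"math/rand"
)

const alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// weakPassword mimics the vulnerable pattern: math/rand seeded with a
// timestamp. math/rand is a deterministic PRNG, so anyone who can guess
// the seed can regenerate the exact same "random" password.
func weakPassword(length int, seed int64) string {
	r := rand.New(rand.NewSource(seed))
	b := make([]byte, length)
	for i := range b {
		b[i] = alphabet[r.Intn(len(alphabet))]
	}
	return string(b)
}

func main() {
	// In the vulnerable pattern the seed is time.Now(); here it is passed
	// in explicitly to show that the output is fully seed-determined.
	fmt.Println(weakPassword(16, 1524000000))
	fmt.Println(weakPassword(16, 1524000000)) // same seed, same password
}
```

An attacker who can narrow down the creation timestamp can brute-force the small seed space offline and regenerate the password.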

If you can understand a PRNG algorithm and how it was seeded without source code, using nothing but radare, faster than I could read the code... then you really do have some superhuman skill, and most of my arguments fall flat.

Subtle cryptography flaws like this could also be introduced intentionally by a bad actor, or under pressure from a state actor. They are very hard to see without source code, in my experience.

You just kind of made my point for me: working out what is going on from the output of something like radare is often much harder than just looking at the source code.

Don't get me wrong, I have a deep respect for people who are very good at finding bugs this way. When you -don't- have source code, finding bugs via methods like this is the only thing you have on the table, and it is impressive.

What I am taking issue with is you trying to in effect claim that some people like yourself are so good at blackbox testing that you could find all potential bugs faster with those tactics than you could reading the relevant source code.

Consider though that not all researchers work this way. Many bugs have been found by myself and other researchers I know by simply reading source code, so your argument that a vendor releasing source code gives it no security advantage is just not true.

While I am no fan of Signal, the fact they make their source code public makes it much easier to audit and trust its e2e cryptography implementation than say Whatsapp. Even the two tools you favor are wildly unequal in transparency and auditability.

Perhaps the majority of my background working with FOSS software has made me undervalue blackbox testing and you have made a good argument for it. It would make me more well rounded and I intend to pursue it.

I think if there is anything you can take from my side of this discussion, it is seeing the value of providing source code to the right eyeballs that know how to quickly spot certain classes of issues.

Source code in the hands of the right person is a faster way to find some bugs than a binary reverse engineering environment.


Oh for God's sake, dude. Write a Go program with a function that seeds math/rand from time.Time, compile it, and then load it into radare. "aa", then find the symbol for your function, then switch to disasm view and look at the function.

(The Terraform function you found this problem in is literally called "generatePassword", in case you planned on writing 6 paragraphs on how hard it is to find the right symbol to start analysis at.)

This is such a silly, marginal bug, it's bizarre to see you kibbitzing on the "right" way to fix it (the bug is that they're using math/rand to generate a password instead of crypto/rand, not that they're seeding math/rand from time). But whatever! Either way: it's clear as day from literally just the call graph of the program what is happening.
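
For reference, a sketch of what a crypto/rand-based generator could look like; this is my own illustrative code under that assumption, not Terraform's actual fix:

```go
package main

import (
	crand "crypto/rand"
	"fmt"
	"math/big"
)

const alphabet = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

// securePassword draws each character from the OS CSPRNG via crypto/rand,
// so there is no seed for an attacker to guess or brute-force.
func securePassword(length int) (string, error) {
	b := make([]byte, length)
	max := big.NewInt(int64(len(alphabet)))
	for i := range b {
		// crand.Int returns a uniform value in [0, max), avoiding
		// the modulo bias of naive byte-masking approaches.
		n, err := crand.Int(crand.Reader, max)
		if err != nil {
			return "", err
		}
		b[i] = alphabet[n.Int64()]
	}
	return string(b), nil
}

func main() {
	p, err := securePassword(16)
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```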

Your example of a bug that is hard to spot from disassembly is a terrible, silly example, that I've trivially refuted in just a couple of Unix shell commands.

I don't think you understand the argument you're trying to have. I get it: you have a hard time looking for bugs in software. That's fine: auditing messengers is supposed to be hard. You don't have to be up to the task; others are.


> IRC is lacking in features but setup correctly with modern OTR I stand by it having easy to reason about security advantages Whatsapp and iMessage do not. (Usability is another story)

The fact that you can replace OTR with OTP in this sort of statement and it becomes even truer should tell you what a lousy argument for the practical security of anything it is.


The apps listed in the spreadsheet are clearly sorted by number of features supported (with weights). I don't think OP is necessarily claiming IRC is more secure than Signal.


They are sorted largely by security, according to the author.

"It is currently roughly sorted by security. Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use."

https://news.ycombinator.com/item?id=18233721


Oops, I guess I missed that comment. Thanks!


You are correct. I generalized this as "roughly sorted by security". Some in the list, like Tox, have notable design flaws, but this list is binary and does not account for implementation quality.

This is mostly for discovery of options to consider diving into.


He's not correct (as he has since acknowledged!), and you did, and still do, suggest that placement on the chart implies greater or lesser security. You literally included instructions on how to read and use the chart that make that point. "Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use."



