I don't know that I made any extraordinary claims.

You published a spreadsheet listing messaging apps 'ordered by security' in which Signal, WhatsApp, and iMessage are shown as way less secure than IRC. It also still says, and you've repeatedly argued here, that things like whether WhatsApp uses E2E encryption are essentially unknowable.




IRC is lacking in features, but set up correctly with modern OTR, I stand by it having easy-to-reason-about security advantages that WhatsApp and iMessage do not. (Usability is another story.)

Notably, if there was a blackbox audit of WhatsApp or iMessage 6 months ago, you have no easy path to check whether a blatant backdoor was introduced in the build you installed last night. You also can't know if there were obvious flaws in the code the whole time that would be very hard to spot in blackbox testing if you didn't know what to look for. Maybe the app build from last night leaks its key intentionally via very subtle steganography in metadata?

Compare that to a binary I installed via F-Droid, which I can confirm was built reproducibly from a given git head whose code review I can go read.
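
Concretely, the check is nothing fancier than comparing digests. A minimal sketch in Go, with hypothetical file names:

  package main

  import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "log"
    "os"
  )

  // fileDigest returns the hex-encoded SHA-256 of a file.
  func fileDigest(path string) (string, error) {
    f, err := os.Open(path)
    if err != nil {
      return "", err
    }
    defer f.Close()
    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
      return "", err
    }
    return hex.EncodeToString(h.Sum(nil)), nil
  }

  func main() {
    // Hypothetical file names: the APK the repository shipped vs. one
    // built locally from the same git tag.
    shipped, err := fileDigest("app-fdroid.apk")
    if err != nil {
      log.Fatal(err)
    }
    local, err := fileDigest("app-local-build.apk")
    if err != nil {
      log.Fatal(err)
    }
    if shipped == local {
      fmt.Println("digests match: the binary corresponds to the audited source")
    } else {
      fmt.Println("digests differ: do not trust the build")
    }
  }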

To use a simple analogy: I can see the exact ingredients that went into -my- meal, not just what went into the food inspector's meal.

This allows much deeper release accountability and -is- a major security feature, worth flagging, that iMessage and WhatsApp lack.

Verifying security with source code is hard enough. Without source code it is substantially harder, and I for one have no interest in using or recommending security tools that fail to be 100% transparent about how they work.

Without source code all we have are claims that can never be thoroughly verified.


You keep making this argument and ignoring experts who tell you it is bogus. The whole thread is there for everyone to read; you can pretend you haven't read the rebuttals, but you can't pretend for everyone else. The idea that "without source code we can't thoroughly verify things" is false, and at odds with basically all of modern software security.


Oh, I read them. I simply, firmly disagree with them, and my personal experience of 15+ years of finding and fixing security issues flies in the face of your statements. We clearly test software for bugs very differently.

I made specific arguments and gave concrete use cases to justify my position, and you have simply told me I am wrong without directly addressing them.

Once again, I find the term "expert" overrated. I for one admit I am not an expert on security, a field that is already hard enough for auditors like myself without source code being withheld.

I have also worked with a half dozen or so security auditing firms, all of which stated that source code access would make their job much easier.

It didn't take hours of blackbox testing for me to find CVE-2018-9057. It took 20 minutes of reading code on GitHub, half asleep at 4am, because I was curious about an unrelated bug.

I remain convinced blackbox testing would very probably never have found that vuln, and even if it did, not in as short a time. I trust Terraform over closed alternatives because it was patched within a couple of hours of me mentioning it on IRC, by a peer who submitted the bug report and patch in one shot. I could verify the source code fix easily and compile the new binary myself to take advantage of the patch before Hashicorp even merged it.

I can also easily verify there are no regressions in future releases.

Tell me how you would go about solving this, or other subtle cases like stego exfiltration, more easily -without- source code. Also tell me how you or your team could have patched the issue yourselves without source code.

If I really did solve this the hard way, then I will by all means shift my security engineering career to focus more on blackbox testing, as you seem to be advocating.


It sounds like you're arguing that you'd require source code to see whether something was using math/rand vs. crypto/rand, something that is literally evident in the control flow graph of a program. I do not doubt that source code makes it easier to review code when you're half-asleep at 4 in the morning.

For your particular example: go download a copy of radare and pull up any Go build artifact, and see for yourself how hard it is to understand what's going on.

I don't know about your security engineering career, but if you intend to get serious about vulnerability research, yes, you should learn more about how researchers test shrink-wrap software. I spent years at a single stodgy Midwest financial client doing IDA assessments to find vulnerabilities in everything from Windows management tools to storage appliances. It wasn't my IDA license; I was augmenting their in-house security team, which had 4 other licenses. This was in 2005.


The primary vulnerability here was that the author used the current time as the sole random seed for generating passwords. The output of the PRNG -looks- random, but it is entirely determined by something an attacker can narrow down (the time of generation), and thus not really random.
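
To make that concrete, here is a rough sketch of the attack in Go (not Hashicorp's actual code; I'm assuming second-granularity seeding for illustration):

  package main

  import (
    "fmt"
    "math/rand"
    "time"
  )

  const chars = "abcdefghijklmnopqrstuvwxyz0123456789"

  // generate mimics the vulnerable pattern: every byte of the "password"
  // comes from a PRNG whose only seed is a timestamp.
  func generate(seed int64, n int) string {
    r := rand.New(rand.NewSource(seed))
    out := make([]byte, n)
    for i := range out {
      out[i] = chars[r.Intn(len(chars))]
    }
    return string(out)
  }

  func main() {
    // The "victim" generates a password seeded with the current second.
    secret := generate(time.Now().Unix(), 16)

    // An attacker who can bound generation to the past day just replays
    // every candidate seed until the output matches.
    now := time.Now().Unix()
    for seed := now; seed > now-86400; seed-- {
      if generate(seed, 16) == secret {
        fmt.Printf("recovered %q after %d guesses\n", secret, now-seed+1)
        return
      }
    }
  }

A full day of candidate seeds is only 86,400 guesses, which falls in well under a second.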

If you can understand a PRNG algorithm and how it was seeded, without source code, using nothing but radare, faster than I could read the code... then you really do have some superhuman skill, and most of my arguments fall flat.

Subtle cryptography flaws like this could also be introduced intentionally, by a bad actor or under pressure from a state actor. They are very hard to see without source code, in my experience.

You just kind of made my point for me: working out what is going on from the output of something like radare -is- often much harder than just reading the source code.

Don't get me wrong, I have a deep respect for people who are very good at finding bugs this way. When you -don't- have source code, finding bugs via methods like this is the only thing on the table, and it is impressive.

What I am taking issue with is your claim, in effect, that some people like yourself are so good at blackbox testing that you could find all potential bugs faster with those tactics than by reading the relevant source code.

Consider, though, that not all researchers work this way. I and other researchers I know have found many bugs by simply reading source code, so the argument that a vendor releasing source code gives it no security advantage is just not true.

While I am no fan of Signal, the fact that they make their source code public makes its E2E cryptography implementation much easier to audit and trust than, say, WhatsApp's. Even the two tools you favor are wildly unequal in transparency and auditability.

Perhaps the fact that the majority of my background is in FOSS software has made me undervalue blackbox testing, and you have made a good argument for it. It would make me more well rounded, and I intend to pursue it.

I think if there is anything you can take from my side of this discussion, it is the value of providing source code to the right eyeballs, the ones that know how to quickly spot certain classes of issues.

Source code in the hands of the right person is a faster route to some bugs than any binary reverse-engineering environment.


Oh for God's sake, dude. Write a Go program with a function that seeds math/rand from time.Time, compile it, and then load it into radare. "aa", then find the symbol for your function, then switch to disasm view and look at the function.
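
Here's the whole test program, if it saves you time (a sketch; name the function whatever you like):

  // seedme.go
  package main

  import (
    "fmt"
    "math/rand"
    "time"
  )

  // seedFromTime is the function to pull up in the disassembly: the
  // calls to time.Now, rand.Seed, and rand.Intn all appear by name in
  // its call graph.
  func seedFromTime() int {
    rand.Seed(time.Now().UnixNano())
    return rand.Intn(1 << 30)
  }

  func main() {
    fmt.Println(seedFromTime())
  }

Then: go build seedme.go, r2 ./seedme, "aa", and disassemble at the symbol (something like sym.main.seedFromTime; exact naming varies by radare2 version).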

(The Terraform function you found this problem in is literally called "generatePassword", in case you planned on writing 6 paragraphs on how hard it is to find the right symbol to start analysis at.)

This is such a silly, marginal bug that it's bizarre to see you kibitzing on the "right" way to fix it (the bug is that they're using math/rand to generate a password instead of crypto/rand, not that they're seeding math/rand from time). But whatever! Either way: it's clear as day from literally just the call graph of the program what is happening.
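
For the record, the correct version is a few lines of crypto/rand. A sketch, not Hashicorp's actual patch:

  package main

  import (
    "crypto/rand"
    "fmt"
    "log"
    "math/big"
  )

  const chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

  // generatePassword draws every character from the OS CSPRNG, so
  // there is no seed for an attacker to guess or replay.
  func generatePassword(n int) (string, error) {
    out := make([]byte, n)
    for i := range out {
      idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(chars))))
      if err != nil {
        return "", err
      }
      out[i] = chars[idx.Int64()]
    }
    return string(out), nil
  }

  func main() {
    pw, err := generatePassword(16)
    if err != nil {
      log.Fatal(err)
    }
    fmt.Println(pw)
  }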

Your example of a bug that is hard to spot from disassembly is a terrible, silly example that I've trivially refuted in just a couple of Unix shell commands.

I don't think you understand the argument you're trying to have. I get it: you have a hard time looking for bugs in software. That's fine: auditing messengers is supposed to be hard. You don't have to be up to the task; others are.


IRC is lacking in features, but set up correctly with modern OTR, I stand by it having easy-to-reason-about security advantages that WhatsApp and iMessage do not. (Usability is another story.)

The fact that you can replace OTR with OTP in this sort of statement, and it becomes even truer, should tell you what a lousy argument it is for the practical security of anything.


The apps listed in the spreadsheet are clearly sorted by number of features supported (with weights). I don't think OP is necessarily claiming IRC is more secure than Signal.


They are sorted largely by security, according to the author.

"It is currently roughly sorted by security. Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use."

https://news.ycombinator.com/item?id=18233721


Oops, I guess I missed that comment. Thanks!


You are correct. I generalized this as "roughly sorted by security". Some in the list, like Tox, have notable design flaws, but the list only tracks binary yes/no properties and does not account for implementation quality.

It is mostly meant for discovering options worth diving into.


He's not correct (as he has since acknowledged!), and you did, and still do, suggest that placement on the chart implies greater or lesser security. You literally included instructions on how to read and use the chart that make that point. "Usability is subjective and people can work down the list to the most secure tool they feel they can comfortably use."



