Wide Impact: Highly Effective Gmail Phishing Technique Being Exploited (wordfence.com)
130 points by aburan28 on Jan 16, 2017 | 46 comments



I agree with the article's suggestion[0] that the display of 'data:text/html' should be changed to amber (or even red)... How often would a non-technical user need to access such a URI? Technical users (much as they may deliberately test insecure sites) would be savvy enough to get past the warning for legitimate uses.

This is pretty much a win-win Chrome hot-fix that could be rolled out asap.

What an excellent analysis of the user perception involved and its obvious remedy.

[0] > What Google needs to do in this case is change the way ‘data:text/html’ is displayed in the browser. There may be scenarios where this is safe, so they could use an amber color with a unique icon. That would alert our perception to a difference and we would examine it more closely.
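
In the meantime the idea can be roughly approximated with an extension. A minimal TypeScript sketch using the standard chrome.tabs and chrome.notifications extension APIs (the notification text and icon file are my own placeholders, not anything Google ships):

    // background.ts - flag top-level navigations to data:text/html.
    // Needs the "tabs" and "notifications" permissions in the manifest.
    chrome.tabs.onUpdated.addListener((_tabId, changeInfo) => {
      const url = changeInfo.url ?? "";
      // Only top-level data:text/html pages matter here; embedded
      // data: images never appear as a tab's URL.
      if (url.startsWith("data:text/html")) {
        chrome.notifications.create({
          type: "basic",
          iconUrl: "warn.png", // placeholder icon
          title: "Suspicious address",
          message: "This page is a data: URI, not a real website. " +
                   "Don't enter credentials here.",
        });
      }
    });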


> What Google needs to do in this case

And presumably also Microsoft, Apple, Mozilla, Opera...


Microsoft never had this issue: "For security reasons, data URIs are restricted to downloaded resources. Data URIs cannot be used for navigation, for scripting, or to populate frame or iframe elements." - https://msdn.microsoft.com/en-us/library/cc848897(v=vs.85).a... (see also http://caniuse.com/#search=datauri)

It's a bit weird from a security point of view that Firefox & Chrome protect against self-XSS (https://bugzilla.mozilla.org/show_bug.cgi?id=994134 & https://bugs.chromium.org/p/chromium/issues/detail?id=345205) but not against navigation to data: URIs.


Another issue is that browsers no longer display the non-secure http:// prefix in the URL bar (which should probably be red and struck through).

As a PoC, I bought the ___domain https.is, and now I can construct URLs like https.is//accounts.google.com, which can look convincing at a glance.
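
You can see how the browser actually parses that string with the standard URL API (runnable in any browser console or Node; I'm assuming the address bar fills in the http:// scheme):

    // The hostname is https.is; "accounts.google.com" is just the path.
    const u = new URL("http://https.is//accounts.google.com");
    console.log(u.hostname); // "https.is"
    console.log(u.pathname); // "//accounts.google.com"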


This is why I disagree with the author's solution and find the Google employee's response fairly compelling:

> The data: URL part here is not that important as you could have a phishing on any http[s] page just as well.

Calling out the use of data URIs doesn't solve the issue at all. I could just as easily register abc.xyz, pick up an SSL certificate, and send users to `https://abc.xyz//accounts.google.com/ServiceLogin?service=ma...` or `https://abc.xyz/https://accounts.google.com/ServiceLogin?ser...`

They get a green lock, and falling for them certainly doesn't require the user to overlook any more than the URI in question does: `data:text/html,https://accounts.google.com/ServiceLogin?service=mail`

Anyone who treats the URI as an opaque string and simply scans for keywords (which is exactly what falling for the data: trick amounts to) is going to be vulnerable to a large variety of attacks, almost none of which the proposed solution addresses.


"Changing your password every few months is good practice in general."

Stop saying that!

https://www.ftc.gov/news-events/blogs/techftc/2016/03/time-r...


I still stand by "change your passwords often".

You never know when somebody has access to your accounts. I learned the hard way that someone had access to my Facebook because they watched me type on the keyboard. Had I changed my password monthly, I would have kicked him out after 30 days. As it stands, that person had access to my account for at least a year, if not more.


What the FTC link is advising against are corporate policies that require password rotation, because in practice it has been determined that this leads to users selecting even less secure passwords and/or writing down their passwords because they cannot remember them. If a user wants to voluntarily rotate their passwords, then that's in no way a problem as long as they aren't compromising password strength in the process.


You didn't have login alerts or approvals enabled? Those would've alerted you to the need for a password rotation instantly without needing to rotate complex passwords on a regular cadence.

If anything, I'd say your comment hardened my position against password rotation given how many mainstream sites with sensitive data expose extra security measures to their users. Take advantage of all of them!


You don't get login alerts if the person is using your Wi-Fi, a network where you once logged in (college, university, work...), or simply a computer you once logged in on (at that friend's place). That person could even disable the alerts and you wouldn't be aware of it.


You can go into Facebook's privacy settings and disown previous logins. You are right that they don't let you manage it with enough specificity to lock out someone who's using the same IPv4 address and browser as you.


Anybody else seeing a certificate error on ftc.gov?



I'm running Chromium 54 on Arch Linux, which I am assuming is not affected, since that page only names version 53. Interestingly, there is no certificate problem on a Windows machine on the same network.


I think the bug is related; the error is NET::ERR_CERTIFICATE_TRANSPARENCY_REQUIRED.

A quick search turns up plenty of info on the bug; you need to upgrade Chromium.


A way to mitigate the attack is to use Google's Password Alert extension in Chrome [1]. If you enter your Google password on a non-Google ___domain, it immediately tells you that you've been phished and prompts you to change your password. I use this extension in conjunction with two-factor auth.

1. https://chrome.google.com/webstore/detail/password-alert/noo...
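
The real extension is more involved, but the core mechanism - keep a rolling buffer of recent keystrokes and compare its salted hash against a hash of your password recorded at enrollment - can be sketched in a few lines of TypeScript. This is my own simplification, not Password Alert's actual code; the salt, stored hash, and length constants are hypothetical:

    // Content-script sketch: detect your Google password being typed
    // on a non-Google origin. All three constants would be recorded
    // once, at enrollment.
    const SALT = "stored-at-enrollment";
    const PASSWORD_HASH = "<hex digest recorded at enrollment>";
    const PASSWORD_LENGTH = 12;

    let buffer = "";

    async function sha256Hex(s: string): Promise<string> {
      const digest = await crypto.subtle.digest(
        "SHA-256", new TextEncoder().encode(s));
      return [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    document.addEventListener("keypress", async (e) => {
      // Keep only the last PASSWORD_LENGTH characters typed.
      buffer = (buffer + e.key).slice(-PASSWORD_LENGTH);
      if (buffer.length < PASSWORD_LENGTH) return;
      if (location.hostname.endsWith(".google.com")) return; // legit
      if ((await sha256Hex(SALT + buffer)) === PASSWORD_HASH) {
        alert("You just typed your Google password on " +
              location.hostname + " - go change it now!");
      }
    });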


Are there legitimate use cases for 'data:...' URIs as clickable links? I understand these URIs can be useful for embedding resources directly into the HTML, e.g. images and icons. But as clickable links, I have only ever encountered them as a means to circumvent popup-blockers. Would it be reasonable for web browsers to offer an option for ignoring clicks on such links?


I have used them for client-side-generated file downloads in the past.
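
For example, something like this TypeScript sketch (a Blob plus URL.createObjectURL is the more modern route, but the data: form works the same way):

    // Offer client-side-generated text as a download, no server involved.
    function downloadAsFile(filename: string, content: string): void {
      const a = document.createElement("a");
      a.href = "data:text/plain;charset=utf-8," + encodeURIComponent(content);
      a.download = filename; // download instead of navigating
      document.body.appendChild(a);
      a.click();
      a.remove();
    }

    downloadAsFile("report.csv", "name,score\nalice,10\nbob,7\n");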


I think they're also sometimes used to strip the referrer from links.


FYI: data URIs will very soon get a "Not secure" badge displayed on the left in Chrome. They're working on it now.

https://bugs.chromium.org/p/chromium/issues/detail?id=594215...


A big part of the danger of this hack is that it compromises email.

Email is a skeleton key for every other account you own. 2-factor auth can be a pain to use, but at the very least you should always use it to protect your email.


This article misses the JPG in the email that looks like the Gmail attachment preview widget, which I think is an important part of the phish.


I don't know if this is interesting or helpful, but I made a Chrome extension that helps verify email information in Gmail. In this instance it would probably help by flagging that something isn't right.

After my initial upload I updated it to allow links (it defaults to no links, but you can click a button to enable them); I just haven't gotten around to re-uploading the plugin. On reflection, removing links altogether was definitely too much. I will update the plugin after work.

PhishBlock Chrome plugin: https://chrome.google.com/webstore/detail/phishblock/mfigocg...


Does a password manager like 1Password catch that the URL is incorrect in these cases?


It would not get filled automatically and it wouldn't appear as a matching site. But you could still paste the password yourself.


No. How would it know what the correct one is? It would refuse to autofill, though, because the URL wouldn't match anything for any stored logins.
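
The match itself is just an origin comparison; a minimal TypeScript sketch of what a manager might do (real managers also handle subdomains and registered-___domain rules, which this ignores):

    interface SavedLogin { origin: string; username: string; password: string; }

    // Logins eligible for autofill on the current page. A data:text/html
    // page has an empty hostname, so nothing ever matches it.
    function matchingLogins(pageUrl: string, vault: SavedLogin[]): SavedLogin[] {
      let host: string;
      try {
        host = new URL(pageUrl).hostname;
      } catch {
        return []; // unparseable address: refuse to fill
      }
      if (host === "") return []; // data:, about:, etc.
      return vault.filter((l) => new URL(l.origin).hostname === host);
    }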


1Password will pop up a warning when you try to fill in the form if the URL differs from the one you originally saved the credentials for.


With two-factor authentication it won't hurt you, even if your username and password get compromised.


I think most "technical" users would have two-factor authentication enabled, which would prevent this type of attack.


No, because the phishing page can act as a man in the middle: it shows you the 2-factor prompt, posts the code you enter to Google, confirms it's in (and receives the cookie enabling access), all while displaying the expected page back to you.

So 2-factor actually provides a false sense of security here.

Edit: unless you have U2F, as per makomk's comment below.


Unless the second factor is U2F, because the actual ___domain is handed to the U2F dongle by the browser and the authentication is tied to that.
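
The origin the browser saw is baked into the signed client data, so the relying party can reject assertions minted on a phishing page. A rough server-side TypeScript sketch, assuming Node and the U2F/WebAuthn client data format (the expected origin here is just an example):

    // Reject assertions whose signed client data was produced for any
    // origin other than ours. The browser, not the page, fills in
    // clientData.origin, and the authenticator's signature covers it,
    // so a proxy at evil.example shows up as "https://evil.example".
    function verifyClientOrigin(clientDataJSONB64: string): boolean {
      const json = Buffer.from(clientDataJSONB64, "base64").toString("utf8");
      const clientData = JSON.parse(json);
      return clientData.origin === "https://accounts.google.com";
    }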


Thanks - good point :)

But with Google Authenticator or SMS codes, it would still be vulnerable.


What about getting a text message on your phone, a call, or a prompt in the Google mobile app? Would those be effective against this kind of attack?


> [...] which would prevent this type of attack.

Depends. See the discussion in the previous post about this: https://news.ycombinator.com/item?id=13372985


Are there any advanced users on HN who would've overlooked the obvious signs in the address bar?

I mean, you don't have to know what the string 'data:text/html' means, because Google Chrome highlights the 'https' by coloring it green and even shows a 'Secure' label right next to it, so the whole area looks fundamentally different.

IMHO only inexperienced users will fall for this. If you regularly look at the address bar before entering critical data into a website, you will get used to the overall look and most likely notice that something is out of order.


Pssh, I'll say it - I'd fall for this, more than 0% of the time. Am I an advanced user? I can try to give you an example of some client side TLS thing I have implemented and we can haggle over where the bar is for "advanced", but give me a Saturday night beer-riddled netflix binge and a midnight email check, I'm clicking this link.

I'd hope my 2FA would freak out, around that point, and save me from myself. I guess it would depend on the type of 2FA.


You'd probably notice, because that page would not ask for the 2FA code.


By default, Google remembers your device for a number of weeks and does not ask for 2FA multiple times on it.


I recently switched from Chrome to Firefox, and might have fallen for it since I'm not used to the address bar.

Heck, sometimes browsers ship an update that changes the address bar's appearance.


But for the attackers it is an odds game.

You probably won't fall for this 99.9% of the time, but the 0.1% of the time that someone "technical" does means the attacker gains access.

All it takes is a moment of distraction, or being tired, or in a rush, etc...


This is crazy. It's 2017. Why are people STILL clicking links in their E-mail? Have people learned nothing? You don't have to be a "technical user" anymore to know that's a bad idea.

Hell, why do major E-mail clients even allow functional hyperlinks in E-mail? The major E-mail clients could solve 80% of phishing overnight by just disabling links. They could probably solve a further 10% by disallowing copying of things that look like URLs.

Sorry if this sounds like victim blaming, but at some point, after enough time, you have to eventually go from "victim" to "culpable".


I click the "unsubscribe" link all the time! Not to mention confirmation email links (much more convenient than entering a code they email me), package tracking links, and a whole slew of others.


I don't know if that's such a great security practice. I thought it was pretty common (at least among the tech-savvy crowd) not to click on email links, but judging by the responses I guess not. Good luck and wear a helmet, everyone!


You've completely misread the point: it appears -- to the user -- to be a regular Gmail attachment. In the attack described, they're intending to click on an attachment, not a link.


Links are pretty integral to the web. If it wasn't email, it would be Slack or social media. Mobile makes the problem worse, because you tap a big button rendered from the link, which can be gamed to look legitimate.

"Design for default-secure" ought to be UX rule #1.


So, there would never be a way to email someone a link to a site? Ever? Links are the whole point of having a Web!



