Pixel-perfect timing attacks with HTML5 (contextis.co.uk)
195 points by leetreveil on Aug 7, 2013 | 46 comments



This is a fascinating attack. Definitely read the bits on the SVG filter timing attacks. They construct something that allows distinguishing black pixels from white pixels, apply a threshold filter to an iframe, and then read out pixels from the contents of that iframe.

Then they turn this around: they set an iframe's src to "view-source:https://example.com/" and read out information from there more efficiently.
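Very roughly, the measurement side looks something like the sketch below (not the paper's code; the class name, frame count, and the use of requestAnimationFrame plus performance.now() to time frames are illustrative assumptions):

  // Hypothetical sketch: time how long frames take to paint while an
  // expensive SVG filter is applied over the target area of the iframe.
  // Slower frames suggest the filter had more work to do, which is what
  // leaks whether the underlying pixel is black or white.
  function timeFrames(element, frameCount, callback) {
    var times = [];
    var last = performance.now();
    // The class name is made up; it would apply the timing-sensitive filter.
    element.className = 'expensive-svg-filter';
    function tick() {
      var now = performance.now();
      times.push(now - last);
      last = now;
      if (times.length < frameCount) {
        requestAnimationFrame(tick);
      } else {
        callback(times);
      }
    }
    requestAnimationFrame(tick);
  }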


I love the way timing attacks seem so unlikely to work, yet turn out to be surprisingly easy ways to extract information.

Everything about this attack is beautiful. A series of seemingly unrelated issues, none of which looks like a problem on its own, combine to produce a solid attack that you could roll out today.

Well worth reading through the whole article.


A lot of security issues start at one seemingly innocuous little toehold and then use, abuse, and combine the hell out of it to do surprising and obviously-undesirable things with it. That's what I find so beautiful about this sort of hack.


> set an iframe's src to "view-source:https://example.com/",

Is it possible to frame view-source?


It used to be possible in Chrome; I'm not sure about Firefox or modern builds of Chrome.


Here's a test, with this markup:

http://jsfiddle.net/GEynT

  <h1>IFrame, normal</h1>
  <iframe src="http://www.example.com/"></iframe>
  <h1>IFrame, view-source</h1>
  <iframe src="view-source:http://www.example.com/"></iframe>
Chrome does not allow it and instead shows a blank frame. Firefox will show the view-source window inside the iframe (which probably /shouldn't/ be allowed). IE10, interestingly, loads that page, and then redirects you to view-source:http://www.example.com immediately.


The paper describes how to prevent the sniffing attack:

Website owners can protect themselves from the pixel reading attacks described in this paper by disallowing framing of their sites. This can be done by setting the following HTTP header:

X-Frame-Options: Deny

This header is primarily intended to prevent clickjacking attacks, but it is effective at mitigating any attack technique that involves a malicious site loading a victim site in an iframe. Any website that allows users to log in or handles sensitive data should set this header.
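For example, a minimal sketch of sending that header from a plain Node.js server (the port and response body are arbitrary):

  // Sketch: send X-Frame-Options: DENY on every response using plain Node.js.
  var http = require('http');

  http.createServer(function (req, res) {
    res.setHeader('X-Frame-Options', 'DENY');
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<p>This page refuses to be framed.</p>');
  }).listen(8080);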

I wonder: why is this an opt-out rather than an opt-in? Shouldn't denying framing be the default, with sites explicitly opting in to being framed?


I know people who try to do interesting things for the users with iframes and are completely frustrated by things like that. File under "why we can't have nice things."


One issue with this is that some sites need to allow iframes from a whitelist of other sites (e.g. facebook apps). I'm working on a solution right now with the Referer header (which points to the container site on initial iframe load). This solution is complicated a little by navigation within the whitelisted iframe, but that should be fixable with cookies (e.g. an "original_referer" cookie).
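A rough sketch of that Referer check (the whitelist entries are made up, and it deliberately ignores the in-frame navigation issue mentioned above):

  // Sketch of the Referer-based whitelist idea; in-frame navigation would
  // still need the "original_referer" cookie trick described above.
  var FRAME_WHITELIST = ['https://apps.facebook.com/', 'https://partner.example.com/'];

  function frameOptionsFor(refererHeader) {
    var allowed = FRAME_WHITELIST.some(function (prefix) {
      return refererHeader && refererHeader.indexOf(prefix) === 0;
    });
    // Whitelisted container sites get no restriction; everyone else gets DENY.
    return allowed ? null : 'DENY';
  }

The returned value, when non-null, would then be sent as the X-Frame-Options response header on the pages that are allowed to be framed.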


For JS apps this can also be done on the client side:

  // Skip fetching sensitive data when this page has been loaded inside a frame:
  if (top != self) doNotFetchSensitiveInfoFromServer(true);


The default state for the web is usually "backward compatible".


These same guys had previously used WebGL to suck out text in the same way; unfortunately the demo is no longer at the same URL, but it is what's responsible for the fairly weird implementation of CSS Shaders: http://www.schemehostport.com/2011/12/timing-attacks-on-css-...

It's amazing that the same thing can be observed with the standard SVG software filters, though. I'd imagine that using X-Frame-Options: DENY as they suggest is a much better solution than killing all JS (because you just know some incompetent ad network would manage to flip the switch and break millions of pages with that ability...).


Would X-Frame-Options:DENY work to mitigate the view-source: attack?


Just threw together a test case. X-Frame-Options does seem to mitigate the view-source attack: http://jsfiddle.net/GEynT/2/embedded/result/


To be clear, the attack is still possible without view-source; using view-source just makes it easier and more generic.


For those, like me, wondering why that 'detect visited' hack doesn't simply make visited links bold (or change their font or font size) and use getComputedStyle or getBoundingClientRect [1] to see whether that changes the bounds of the element: that trick was mitigated three years ago. See http://hacks.mozilla.org/2010/03/privacy-related-changes-com....

[1] not explicitly mentioned there, but I think the solution described intends to plug that hole, too.
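For context, the old, now-mitigated trick looked roughly like this sketch (the probe color and the assumed stylesheet rule are placeholders):

  // Sketch of the classic getComputedStyle history-sniffing trick that the
  // linked article describes browsers blocking around 2010: visited links
  // used to report their real computed style, so arbitrary URLs could be
  // tested against the user's history.
  function probablyVisited(url) {
    var link = document.createElement('a');
    link.href = url;
    document.body.appendChild(link);
    var visited = getComputedStyle(link).color === 'rgb(255, 0, 0)';
    document.body.removeChild(link);
    // Assumes a stylesheet elsewhere contains a:visited { color: #f00; }.
    return visited;
  }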


These attacks are getting more and more creative. I'm beginning to think there is no such thing as perfect security in a world that constantly demands new features.


Don't think about security that way; that kind of logic is misleading. Security is measured in dollars, in the sense of cost imposed on attackers.

You're right that there's a constant tension between features and security.


Right. To be more precise, security is a set of costs on web developers, web viewers, and attackers. There is no obviously correct way to balance all three once you start tallying feasible but removed or missing features, ones that would be genuinely useful, as costs.


Google "Users want and demand a rich computing experience." Back in the 90's a Microsoft person made that claim on comp.risks. It kind of became a joke to call every new attack a "rich computing experience."


There is no such thing as perfect security.


Sure there is. An application or computer doesn't have to be insecure to work. It's just very hard to make no mistakes in complex machines and software.


You can always turn the device off.

It really is a matter of features and how they're implemented.

Good luck picking a byte that can exploit a 7400.


If we're only talking about exploiting a device across a network, sure turn it off or disconnect it from the network. But there's more to security than that.

One can always take the device and turn it on for oneself.

If one can't exploit the device, one can resort to rubber-hose cryptanalysis.


We are only talking about exploiting a device across a network.


Then bridge the gap by infecting pendrives. That's how e.g. Stuxnet worked.


I don't understand what you're replying to. Physical access is a great way to bypass network security, but it has nothing to do with websites.


"Exploiting device across network".

Security doesn't exist in isolation. In other words, there's always a way.


If you assume a specific target, there is always a way to get to them.

If you're talking about making a browser secure against internet-based attacks, there is not always a way. This type of security is merely extremely, overwhelmingly difficult.


It seems to me like a web server ought to be able to send some signal to browsers on either a single page or subdomain basis, which disables JS for those pages. If another page includes such a JS-disabled page in an iframe, then at the very least, all scripts on the parent page should be immediately terminated, and ideally loading of the iframe should fail if any scripts have executed (obviously an exception should be made for, e.g. Chrome extensions).

This should completely nullify a vast number of potential attacks for sites that are particularly sensitive. There's no reason, for example, that the logged-in portion of a banking site should need to use JS. That seems like a reasonable sacrifice for adding significant security to critical websites.


> There's no reason, for example, that the logged-in portion of a banking site should need to use JS.

Said no one who has ever had to develop a decent web ui.


That's what I do professionally, and I can say with confidence that a banking website does not need to use JS. There is no functionality there that can't be done the old-fashioned way.

Online banking does not need to be a rich HTML5 experience, and online banking worked just as well as it does today before the modern trend of trying to make everything act like a desktop app.

Would developing the UI without using JS be harder? Yes, marginally. Is it worth opening up security vulnerabilities to make development slightly easier? No. Just in terms of how much each of those costs the bank, no. From the users' perspective, no.


Developing the UI without using JS wouldn't be harder, it would make the UI less usable. As a user of online banking software, I don't think it worked at all as well "before the modern trend of trying to make everything act like a desktop app", I think it totally sucked. Maybe that's a worthwhile security trade-off, but let's at least talk about the real trade-off.


Every online banking site I've used had a UI that could be entirely recreated without using JS, with literally no perceivable difference to the user. I have never once said "Man, I sure wish I could get to my balance without a new page loading when I clicked the link."


It sounds like it is our expectations rather than our experiences that differ wildly. I have many times said that I would like to be able to get to my balance without a new page loading. That and much more. Clearly, "new page loading" isn't really the metric I care about. I care about how quickly and un-frustratedly I can accomplish whatever I'm trying to accomplish. Anecdotally, and limiting the discussion to banking websites, I find the "new page loading" metric correlates with the "slow and frustrating" metric. I get the sense that you feel differently, and I don't think it's worthwhile for me to try to talk you into frustration or you to try to talk me out of it. You may be able to convince me that the frustration of going JS-free is worth the enhanced security, but as of now, there appear to be better solutions.


GMail, etc, is just as important from a security perspective as your banking site since it could be used to perform a password reset. It could conceivably be iframe'd and have its contents sucked out.

It's unlikely that every link in the chain will stop using JS, so we must develop more creative methods.

There's also a history attack in here based on observing a repaint due to a link changing color. So even if one did turn off JS due to some signal, oppressive regime X could still sniff if their subjects had visited website Y and do bad things to them. At this point tracking visited links seems like it's more trouble than it's worth!


>GMail, etc, is just as important from a security perspective as your banking site since it could be used to perform a password reset. It could conceivably be iframe'd and have its contents sucked out.

Now that is a good point. In general, I don't know what to do about the weak link of email, which goes far beyond sniffing. I think it's hard for people to properly respect the gravity of their email's security when the vast majority of what comes through it is basically frivolous, or at least security-noncritical.


A few modern form features like placeholder text, required, and input types for email, tel, number, and date reduce the need for JavaScript, if you're willing to let the experience degrade pretty drastically on legacy browsers.

On a side note, I found the clip art at the top of the white paper distracting, along with its formatting, spacing, and details like the class name "nav" and value="passwo".


See https://www.pncvirtualwallet.com/tour/online-money-managemen... for helpful use of JS in online banking


Offhand I know that Chase.com uses JS in their online banking, almost for no reason too...


Ah! The opinion of someone who cares nothing about the functionality of the web as a whole.

JavaScript shouldn't be considered essential to use a website. If it is, the site has failed at its job.


Not if you want to give the user decent affordances in the UI on non-HTML5 browsers.


That depends on what you mean. I _prefer_ UIs that have no JavaScript in them, and did even before HTML5.


I have a soft spot for side-channel attacks; they are often a beautiful example of out-of-the-box thinking. This whitepaper is no exception, in particular the second part about (ab)using the SVG filters.

One thought (not that it helps much in mitigating this attack): they calculate average rendering times over several repeats of the same operation, but when profiling performance timings it's usually much more accurate to take the minimum execution time. The constant timing you want to measure is part of the lower bound on the total time; any random OS process or timing glitch adds to that total, but it will never make the timespan you are interested in run faster. There might be some exceptions to this, though (in which case I'd go for a truncated median, a percentile-range average, or something similar).
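Something like this toy sketch (it only handles synchronous operations, which isn't quite how per-frame rendering times are collected, but it shows the idea):

  // Toy sketch: repeat an operation and keep the minimum elapsed time, since
  // noise from the OS or scheduler only ever adds to a measurement.
  function minTime(operation, repeats) {
    var best = Infinity;
    for (var i = 0; i < repeats; i++) {
      var start = performance.now();
      operation();
      var elapsed = performance.now() - start;
      if (elapsed < best) best = elapsed;
    }
    return best;
  }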

I also had some ideas to improve performance of the pixel-stealing, as well as the OCR-style character reading. For the latter, one could use Bayesian probabilities instead of a strict decision tree; that way it would be more resilient to accidental timing errors, so you wouldn't need to repeat measurements as often to ensure that every pixel is correct. Just keep reading out high-entropy pixels and adjust the probabilities until there is sufficient "belief" in a particular outcome.
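A minimal sketch of that sequential Bayesian update, assuming each timing-based reading of a pixel is independently correct with some fixed probability (both parameters below are made up):

  // Sketch: sequentially update the probability that a pixel is black from
  // noisy timing-based readings, stopping once the belief is strong enough.
  // 'accuracy' is the assumed chance that a single reading is correct.
  function estimatePixel(readPixelOnce, accuracy, confidence) {
    var pBlack = 0.5; // uninformative prior
    while (pBlack < confidence && pBlack > 1 - confidence) {
      var readingSaysBlack = readPixelOnce(); // true/false, may be wrong
      var likelihoodBlack = readingSaysBlack ? accuracy : 1 - accuracy;
      var likelihoodWhite = readingSaysBlack ? 1 - accuracy : accuracy;
      // Bayes' rule, with the current belief as the prior.
      pBlack = (likelihoodBlack * pBlack) /
               (likelihoodBlack * pBlack + likelihoodWhite * (1 - pBlack));
    }
    return pBlack >= confidence; // true => black, false => white
  }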

But as I understand from the concluding paragraphs of this paper, these vulnerabilities are already patched or very much on the way to being patched; otherwise I'd love to have a play with this :)


That was the most interesting thing I've read in a while.


To mitigate the new visited-link detection vectors, browsers could render everything as unvisited and then asynchronously render a 'visited' overlay (in a separate framebuffer) at a later time. SVG filters would have to be processed twice for the visited-sensitive data, so a vendor may just wish to limit SVG filters to processing only the 'unvisited' framebuffer for the sake of performance.



