"Mixed content warning", how I loathe thee (37signals.com)
26 points by astrec on Nov 27, 2008 | 24 comments



No. There are two categories of developers: those who understand why this is a warning-worthy situation and would never do it, and those who shouldn't be trusted to do the right thing and for whom this message exists. It saddens me that a site as reputable as 37signals would come down on this side of the issue.

When people see the padlock icon they expect uncompromised security. (That's not something we can provide, strictly speaking, but it's the expectation just the same.) There are innumerable attack vectors that letting part of the traffic avoid SSL and cache controls would enable. Here's what I can think of right off the top of my head:

1) Unencrypted HTTP snooping of Referer header info: base.css being requested by https://secure.lousy.gov/cgi-bin/home.pl?ssn=123-45-6789

2) simple http/https confusion or misconfiguration: http(no-s)://secure.lousy.gov/tempdir/nonce123456/taxes.pdf -- generated by a backend process on the server -- this could be an over-the-wire or cache issue

3) Assorted JavaScript injections. This would be a sweet malware attack vector, actually. Once you got people to download your .exe, you could poison browser caches, updating banks' JavaScript files to download 1x1px images with encoded bank information from hostilesite.com.

Long story short, just like with certificate warning issues, when the argument comes down to "this security error is too confusing for $target_audience, can't we just turn it off?" the answer is always no.
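
Vector #1 above, sketched concretely (the stylesheet's host name here is hypothetical): the browser fetching the plain-HTTP stylesheet volunteers the full HTTPS page URL, query string and all, in its Referer header, in cleartext.

```javascript
// Sketch of the request an observer on the wire would see when an
// HTTPS page pulls in a plain-HTTP stylesheet (page URL from above):
const pageUrl = 'https://secure.lousy.gov/cgi-bin/home.pl?ssn=123-45-6789';
const request = [
  'GET /base.css HTTP/1.1',
  'Host: static.lousy.gov',   // hypothetical host serving base.css
  'Referer: ' + pageUrl,      // the sensitive URL, unencrypted
].join('\r\n');
```

The SSL tunnel protected that URL on the way in; the sub-request leaks it right back out.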


The referrer information disclosure is a very good point. In general, I kind of wish that browsers did not include Referer headers when going from an https page to a page on a different ___domain (e.g. https://wiki.yourcompany.com/competitors/Plan_To_Squash_Foob... -> http://www.foobar.com/pricing)


Browsers don't send the referrer header if the referrer URL is HTTPS and the destination is HTTP.


>Note: There’s a reasonable argument for warning on JavaScript includes as man-in-the-middle attacks can do nasty things, but that’s not true for CSS (on anything but IE) or images

Changing an image on a button from "reply to this quip from your friend!" to "submit password to re-login" isn't a security risk?


This is exactly the issue. Once some content on the page is not SSL, you can't trust what you're seeing.

Of course, it's easy to fix with an extension to HTML, something like hrefhash="DSKJsdfjsdfs234" so the browser can verify that the referenced resource hasn't been tampered with. Then you can have your non-private data served over HTTP, and cached by the browser and by proxies, and you don't need the extra server side computation of SSL.


I think browsers should respect HTTP's caching headers, even if the content is served over SSL. If the server says "hey, it's OK to cache this", I don't think the browser should second-guess that.

If you want to be ultra-secure, get an extension for your browser that doesn't cache anything.


I agree. But extending HTML to use a hash for verification has additional benefits. One - proxies can see plain HTTP, but they can't cache HTTPS (I think). Two - you can share the same images for the HTTP and HTTPS pages on your site. And three - you can point to a resource (e.g. a Javascript library, a download link, whatever) that you don't control, but you can now trust that the other site won't change it to something malicious.


Just thought of another benefit - if the referenced resource is cached, the browser can save an extra HTTP request that it would have sent to check if the referenced data has been modified. Now it knows, given the hash, that it must be unchanged.
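
Continuing the sketch (the hash attribute and the cache-entry field are still hypothetical): a cached copy whose stored digest matches the page's hash needs no conditional request at all.

```javascript
// Sketch: if the referencing page carries a content hash, a cached copy
// whose digest matches can be reused without an If-Modified-Since /
// If-None-Match round trip -- the matching hash already proves it is
// unchanged. Entry shape and field names are illustrative.
function needsRevalidation(cacheEntry, pageHash) {
  return cacheEntry.hash !== pageHash;
}

needsRevalidation({ hash: 'abc123' }, 'abc123'); // false: reuse, no request
needsRevalidation({ hash: 'abc123' }, 'def456'); // true: refetch
```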


All good points. I like your idea.

I assume most people have frameworks that generate their <script src=...> tags for them, so this shouldn't even require developers to think about it.


I had a similar idea a while ago, and put the results of my brainstorming here: http://eiman.tv/misc/http-content-signature.html

The idea is to add a signed hash of the content in the HTTP headers, along with a way of verifying the signature. The point of it all is to make it possible to verify the source and integrity of content served from non-HTTPS servers.

Feedback welcome!


You could specify that you expect the referenced resource to be served with a given ETag. Would that be good enough?


ETags can be arbitrary, the server can put whatever it wants.


Ah, yes. An attacker could fetch the resource themselves, discover the ETag and serve their malicious resource with the real ETag. Sorry.


Also, CSS in IE can contain JavaScript. I'm not sure how much access it has to the DOM though.


I can't believe such an uninformed and dangerous post is sitting in public, implicitly endorsed by 37signals.

From the URL it looks like it may be getting pulled out of an svn repo or some other internal resource. Nonetheless, they need to get rid of this, because I would certainly hesitate to use their software if they do not understand such fundamental aspects of security as this.

Quite apart from the dangerous notion of mixing SSL and non-SSL content, even the part about caching is just plain wrong. Recent browsers will cache SSL resources to disk if you send appropriate cache-control headers (this was one of the huge issues fixed in FF3.0).


The "svn" in the url is a reference to the name of the blog: "Signal vs. Noise". It is very much in public, the second link on their front page goes to it.


I agree it's a pain, although I assume one of the other reasons would be a badly programmed website that does something silly like

myImage.src = "http://www.blah.com/foo.png?username="+username+"&password="+password;

The user is under the impression that they are secure and their data is protected from prying eyes, but the website has gone and sent their username and password out over a non-secure channel.

So it's not just js/css that could cause leakage/security issues, but any http request.

This is of course the fault of the website for being sloppy with the user's private data, but I can see the argument that users should be made aware that there is a possibility of sensitive data being "leaked".

The issue is: who do you want to be the authority on what data is "boring/generic, doesn't matter" and what data is "sensitive"? The website, or the user?


>> myImage.src = "http://www.blah.com/foo.png?username="+username+"&password="+password;

No encryption (or technology even) of any kind could protect against that kind of stupidity.


No encryption (or technology even) of any kind could protect against that kind of stupidity.

Seriously. You can't just interpolate variables into URLs without escaping :)
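
For the escaping point specifically, the standard tool is encodeURIComponent (though the real fix for the quoted snippet is not to put credentials in a URL at all):

```javascript
// Escape user-supplied values before interpolating them into a URL;
// otherwise characters like & and = change the query string's meaning.
const username = 'alice&admin=true';

const unsafe = '/lookup?user=' + username;
// '/lookup?user=alice&admin=true'  -- an extra parameter got injected

const safe = '/lookup?user=' + encodeURIComponent(username);
// '/lookup?user=alice%26admin%3Dtrue'
```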


Well at the moment, some browsers will warn you when this code executes, as it's loading http, from an https page.

Yes it's an extreme example, but there are likely other examples where the data isn't quite so sensitive.


It's too bad that a company as prominent as 37signals doesn't understand that IE is doing exactly what it should be doing in this case, and that Safari is actually the problem. If IE were stricter in a lot more cases, we'd have fewer "well, it works in IE, it must be good" problems overall.

Caching issues aside, few people know that the protocol portion of a URL is actually optional and is inherited from the base URL (just like the origin/server-name portion is). So if your entire site can be served over HTTPS, you can still use relative paths anchored at different server names. On

   https://www.example.com/something
the following relative paths will also be requested via HTTPS:

   /somethingelse
   //www.example.com/media/styles.css
   //media.example.com/images/buybutton.gif
Wildcard certs come in handy here. Then it's up to your app code to make absolute _individual_ links and/or redirect (with or without https) when you want a page to be requested securely. Building URLs for internal resources is much more straightforward.
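
The inheritance rules above can be checked with a URL parser (shown here in Node): scheme-relative references keep the page's protocol but replace the host.

```javascript
// Resolving relative references against a page served over HTTPS:
const base = 'https://www.example.com/something';

new URL('/somethingelse', base).href;
// 'https://www.example.com/somethingelse'   -- scheme and host inherited

new URL('//media.example.com/images/buybutton.gif', base).href;
// 'https://media.example.com/images/buybutton.gif'
// scheme-relative: https: inherited, host replaced
```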

As for the referrer, let's just remember that if you're using an SSN as a primary key AND exposing that primary key in links, you've got more serious issues than "mixed content warning".

I actually sent a bug report one time to some search engine that they were building URLs wrong and not properly inheriting the protocol and they fixed it.


My biggest issue is CDN support. Our CDN does not support https, so all of our js/css/img files come from an http subdomain. If we serve them from the www ___domain (which does support https), we will duplicate a lot of the content we serve. So it's a no-win situation :(

How do you deal with this issue?


I agree that this is annoying. It seems like web browsers are shaped by the needs of whiny "security researchers" and their 1-in-100,000 attacks, rather than by the needs of users. Sure, it's possible that someone is injecting malicious images into your secure web page, but probably not. It's tough balancing ultimate security against application usability, but I think the pendulum is a little on the side of reactionary security right now.


I'm not sure you're aware of just how much fraud is perpetrated on the Internet.

You're also probably completely underestimating how common these attacks are - believe me, phishers use these attacks, and they also use attacks many times more sophisticated. No attack is ever one in a million. If there is a loophole, a whole network of phishers will jump on it very quickly.

The cost of this fraud is massive and it costs individuals and economies a lot of money.



