The Internet is a pretty seedy place, yet we are quite willing to hand over our secrets to a small group of trusted web apps. Heck, in recent years, we also started giving them capabilities: social networking sites often get to see your geolocation, and your instant messenger may be able to access your microphone or webcam feeds. Some of this does not even require your initial consent: certain browsers and plugins come with hardcoded domains that are permitted to install software updates, or change system settings on a whim.
Worse, content isolation on the web is very superficial - so even where the boundaries can be drawn, most types of privileged contexts can't distance themselves from the rest of the world and expose just a handful of well-defined APIs. Instead, every non-trivial web application needs to heavily compensate for the risk of clickjacking, cross-site request forgery, reflected cross-site scripting, and dozens of other attacks of that sort. All the developers eventually fail, by the way: show me a domain with no history of XSS, and I will show you a web application nobody cares about.
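To make the per-request busywork concrete, here is a minimal sketch of one of those compensating checks - a CSRF token that every state-changing handler has to issue and verify. The function names are made up for illustration; only Node's standard `crypto` module is assumed.

```typescript
import { timingSafeEqual, randomBytes } from "crypto";

// Mint a fresh, unguessable token to bind to the user's session.
function issueToken(): string {
  return randomBytes(16).toString("hex");
}

// Reject a state-changing request unless the token echoed back in the
// form body matches the one stored server-side for this session.
// timingSafeEqual avoids leaking the match via response timing, but
// throws on unequal lengths, hence the guard.
function checkCsrf(sessionToken: string, requestToken: string): boolean {
  const a = Buffer.from(sessionToken);
  const b = Buffer.from(requestToken);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The point is not that the check is hard - it's that it must be bolted onto every sensitive endpoint by hand, because the platform draws no boundary for you.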
Unlike some other tough challenges in browser engineering, the risks of living with privileged applications could be mitigated fairly nicely simply by requiring some effort up front: even without inventing any new security mechanisms, you could require applications to use origin cookies, deploy a sensible Content Security Policy, and use HSTS, before being allowed to prompt for extra privileges. It's not impossible to do something meaningful - it's just unpopular with the creators of privileged APIs.
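As a rough sketch of what such a gatekeeping check might look for, the browser could simply inspect the response headers of a candidate app before allowing a privilege prompt - all three mechanisms named above are just headers. The specific values below are illustrative, not a proposed policy:

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
Content-Security-Policy: default-src 'self'; script-src 'self'; frame-ancestors 'none'
Set-Cookie: session=...; Secure; HttpOnly
```

Nothing here is exotic; the only missing piece is a rule saying "no headers like these, no privileged APIs for you."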
But the problems with the clarity and robustness of application boundaries aside, there is also a third, perhaps more fascinating issue: what do you do if your web application execution context becomes corrupted in some way? As it turns out, there is no mechanism for the server to say that from now on, it wants to have a clean slate, and that the browser should drop or at least isolate any already running code, or previously stored data.
This seemingly odd wish is actually critical to web application security. For example, let's assume there is an XSS vulnerability in a web mail system or a social networking application. Because of the convenient but unfortunate design of HTML, such vulnerabilities are unavoidable, but we seldom wonder if it's possible to cleanly and predictably recover from them. Intuitively, patching the underlying bug, invalidating session cookies, and perhaps forcing a password change, is all it should take; in fact, applications using httponly cookies can often skip the last two steps.
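For completeness, the "invalidate session cookies" step in that intuitive recovery playbook is cheap to implement server-side. A minimal sketch, assuming an in-memory store (the `SessionStore` class and its method names are invented for illustration):

```typescript
type Session = { user: string; epoch: number };

class SessionStore {
  private sessions = new Map<string, Session>();
  private userEpoch = new Map<string, number>();

  // Record a new session, stamped with the user's current epoch.
  issue(id: string, user: string): void {
    const epoch = this.userEpoch.get(user) ?? 0;
    this.sessions.set(id, { user, epoch });
  }

  // Post-incident kill switch: bump the per-user epoch, instantly
  // orphaning every session issued before the bump.
  revokeAll(user: string): void {
    this.userEpoch.set(user, (this.userEpoch.get(user) ?? 0) + 1);
  }

  // A session is live only if its epoch still matches the user's.
  isValid(id: string): boolean {
    const s = this.sessions.get(id);
    return s !== undefined && s.epoch === (this.userEpoch.get(s.user) ?? 0);
  }
}
```

Note what this can and cannot do: it cleans up server-side state perfectly, but says nothing about code or data the attacker may have already parked inside the victim's browser.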
And let's not forget open wireless networks: the problem there is just as bad. It does not matter that you are not logged into anything sensitive while visiting Starbucks. An invisible frame, a strategic write to localStorage, or a bit of DNS or cache poisoning, is all it takes for the attacker to automatically elevate his privileges the moment you return to a safe environment and log back in.
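The localStorage variant of this attack deserves a sketch, because it shows why the injection outlives the hostile network. The tiny `Map` standing in for `window.localStorage` and the key name `cached_widget` are purely illustrative, not any real application's API:

```typescript
// Stand-in for window.localStorage, so the sketch runs outside a browser.
const localStorage = new Map<string, string>();

// Step 1, on the open network: the attacker's one-off injected script
// plants a payload where the app expects to find its own cached markup.
localStorage.set(
  "cached_widget",
  "<img src=x onerror=\"/* attacker's code runs here */\">"
);

// Step 2, later, on a trusted network: a common but unsafe pattern
// revives the payload, e.g. element.innerHTML = stored value - now
// executing in the victim's freshly authenticated session.
function renderWidget(): string {
  return localStorage.get("cached_widget") ?? "";
}
```

The write happened once, on hostile ground; the execution happens every time the app trusts its own storage afterwards.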
With all that, and with the proliferation of mechanisms such as web workers and offline apps, we are rapidly approaching a point where recovering from a trivial XSS bug and other common web security lapses is getting almost as punishing as recovering from RCE - and for no good reason, too. Sure: today, it's so easy to phish users or exploit real RCE bugs that backdooring web origins is not worth the effort. But in a not-too-distant future, that balance may shift.