Monday, January 9, 2012

BURP SUITE - PART V: MAPPING THE TARGET

Today I will give you a quick overview of how to map the web infrastructure of a target website using the tools built into Burp Suite. I probably should have covered this in the beginning, as it is a fairly basic task, but I seem to get a lot of questions on it, so here we are. We will be focusing on the Target & Spider tabs for this tutorial. I will lay out the steps from setup to run and try to keep it in the usual easy-to-follow format, so here goes…

We will start off as usual by configuring our browser to use a proxy on localhost (127.0.0.1), pointed at port 8080, unless you have changed the default settings for Burp to listen on another port (as I have in the example below, where I use 8181 since Apache is already running on 8080).


Once our browser is properly configured, we can fire up Burp Suite by double-clicking the JAR file. Now we can start capturing requests from the browser and playing with them within Burp. I like to turn the Interceptor off on the Proxy tab so all requests flow as normal and can be easily picked up on the Proxy history tab if/when/as needed. In order to map the target site, we first need to fire a request in our browser to the root directory of the site (i.e. http://www.targetsite.com/). Once it has gone through, navigate to the Proxy history tab to review the actual request that was sent. Right-click on the request and choose “Add item to scope”. In doing this we define the site as one of our targets.
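If you want to double-check that traffic really is flowing through Burp before you start clicking around, you can also push that first request through the proxy from a quick script. This is just a minimal sketch assuming the Python requests library and my 8181 listener; swap in port 8080 and your own target URL as needed, and the request should land in the Proxy history tab exactly like one sent from the browser.

import requests

# Placeholder values: adjust the port to wherever your Burp listener is bound
# (default 8080, mine is 8181) and the URL to your own target's root directory.
BURP_PROXY = "http://127.0.0.1:8181"
TARGET_ROOT = "http://www.targetsite.com/"

proxies = {"http": BURP_PROXY, "https": BURP_PROXY}

# verify=False because Burp serves its own CA certificate for HTTPS targets;
# for a plain HTTP target it is simply ignored.
resp = requests.get(TARGET_ROOT, proxies=proxies, verify=False)
print(resp.status_code, len(resp.content))  # the request now sits in Proxy history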


I know you can see the “Spider from here” option and I know it is tempting to jump ahead, but if you want clear and easy-to-interpret results then hang tight for just one minute... Now that we have added the target site to our in-scope items, we will further define how our Spider and Target settings should be configured for best results. We start with the Scope sub-tab under the Target tab. Here we define what is considered in scope. If you are working on multiple sites, or if you know two domains are linked, you can add them here as needed. This is what lets us filter our results apart from all of the other sites which get pulled in while crawling and spidering due to ads, photos, etc. You can add them as outlined above, or you can copy and paste your URL links directly into this tab. I should also call out that you can define items to exclude from scope. This is helpful for avoiding pages which might cause the crawler/spider to be logged out and thus produce incomplete results. The default settings are fine, but this is where you go in those pesky situations to help fine-tune things.
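To make the include/exclude idea a little more concrete, here is a rough Python sketch of how scope rules behave conceptually. This is not Burp’s actual matching engine (Burp has its own protocol/host/path rules), just an illustration of why an exclude entry for something like a logout page keeps the spider from killing its own session; the domains and paths are made up for the example.

# Conceptual illustration only - not Burp's real scope logic.
INCLUDE_PREFIXES = [
    "http://www.targetsite.com/",
    "http://partner.targetsite.com/",       # a second, linked domain
]
EXCLUDE_SUBSTRINGS = ["logout", "signoff"]  # pages that would end our session

def in_scope(url):
    """A URL is in scope if it matches an include rule and no exclude rule."""
    included = any(url.startswith(prefix) for prefix in INCLUDE_PREFIXES)
    excluded = any(marker in url.lower() for marker in EXCLUDE_SUBSTRINGS)
    return included and not excluded

crawled = [
    "http://www.targetsite.com/index.php?id=1",
    "http://www.targetsite.com/logout.php",
    "http://ads.example.net/banner.js",      # pulled in by an ad network
]
for url in crawled:
    print(in_scope(url), url)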

NOTE: the Sitemap sub-tab is where all of the results will be presented once the crawling/spidering begins; hang tight and we will come back to that one in a bit…

OK, now that we have defined our scope, we need to set the options for our crawling and spidering. These options define how the crawling is done, how forms are handled, and so on. In the first half of the options you can set some general settings, like checking the robots.txt file for added info or the crawl depth to use. The upper half is also where you define how forms are handled. You can fill out some basic info once and have it submitted automatically any time the crawler comes across a form, using best-effort matching to fill in the fields; you can also have it ignore forms altogether, or go into interactive mode and have it prompt you for action on every form it encounters. If the crawler has valid credentials it will obviously get a more in-depth crawl with more accurate results, so if you have proper credentials, enter them here. If not, just remember that whatever you submit may show up in a report or in the logs, so you may want to change the defaults if you’re on a paid gig ;)
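As a quick aside on why the robots.txt option is worth leaving on: Disallow entries often point straight at directories the owner would rather keep out of search engines, which makes them interesting to us. The snippet below is a rough idea of what that check amounts to, again assuming the Python requests library; the host is a placeholder.

import requests

# Placeholder host; point this at your in-scope target.
resp = requests.get("http://www.targetsite.com/robots.txt", timeout=10)

# Each Disallow entry is another path worth feeding to the spider.
if resp.status_code == 200:
    for line in resp.text.splitlines():
        line = line.strip()
        if line.lower().startswith("disallow:"):
            print(line.split(":", 1)[1].strip())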



NOTE: If you have a password-protected shell uploaded to a site and you can’t seem to find it, the “prompt for guidance” option for application logins is a good way to quickly track down the link to your missing shell: you can check the links as they are found for authorization pages and then manually inspect each one to see if it is your shell (it works for me with a high degree of success!).

The lower half of the options tab allows you to set the speed at which the spider engine will run. Please be aware of your own system limitations before adjusting the thread count too high – the default settings run fairly effectively in most cases. You can also set header-level information to be carried in the requests made while crawling, as well as unticking the HTTP/1.1 box to force communication over the older HTTP/1.0 standard.
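For anyone curious about what unchecking that HTTP/1.1 box actually changes on the wire, the sketch below sends the same GET both ways over a raw socket; the practical difference is that HTTP/1.1 requires a Host header (and supports things like persistent connections), while HTTP/1.0 does not. The hostname is a placeholder and this is only meant to show the request format, not how Burp builds its requests internally.

import socket

HOST = "www.targetsite.com"   # placeholder target

# HTTP/1.0: no Host header required, connection closes after the response.
req_http10 = b"GET / HTTP/1.0\r\n\r\n"

# HTTP/1.1: Host header is mandatory; Connection: close so the server hangs up.
req_http11 = (
    "GET / HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(HOST).encode()
)

for raw in (req_http10, req_http11):
    with socket.create_connection((HOST, 80), timeout=10) as sock:
        sock.sendall(raw)
        status_line = sock.recv(64).split(b"\r\n")[0]
        print(raw.split(b"\r\n")[0].decode(), "->", status_line.decode(errors="replace"))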




OK, now that we have everything properly set up for our scan, we make one last change which makes a huge difference in our results and how easy they are to interpret. We go to the Target tab and the Sitemap sub-tab to activate our filters. Click on the grey bar at the top and it will expand. In the filter section you can trim the results to your liking (by server response code, MIME type, file type, only parameterized requests for you injectors out there, etc.). I set the filters up the way I like them, but the single most important one is showing only in-scope items. This blocks out all the extra noise. If you don’t believe me, try it once without the filters and then again with them to see how big a difference this small step makes.
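The “parameterized requests only” filter deserves a special mention for injection hunting, since it narrows the tree down to URLs that actually accept input. As a rough illustration of the kind of entries that filter keeps (made-up URLs, and nothing to do with Burp’s internals), the snippet below pulls the query parameters out of a few sitemap-style links with Python’s standard urllib tools.

from urllib.parse import urlparse, parse_qs

# Made-up sitemap entries for illustration.
sitemap = [
    "http://www.targetsite.com/index.html",
    "http://www.targetsite.com/products.php?id=7&sort=price",
    "http://www.targetsite.com/login.php",
]

for url in sitemap:
    query = urlparse(url).query
    if query:                      # this is what "parameterized only" keeps
        print(url, "->", parse_qs(query))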

It should appear as follows once the filters menu is expanded:


SPECIAL SIDE NOTE: If you want to save the requests and/or responses from a session, you need to set this up from the start by clicking on the Options tab and then on the Misc. sub-tab. Here you can choose to log requests, responses, or both. This is helpful if you want to manually review things after your work, or if you want to pass the log to another tool such as sqlmap (which can parse a proxy log file supplied with its -l option) to hunt for possible SQL injection vulnerabilities. If you choose to log anything, it will prompt you to name the log file and pick a location for it.


This should get you all set up. We now go back to the Target tab and the Sitemap sub-tab, where you will see just your target site; right-click on it and choose the “Spider this host” option to launch the crawling and spidering process.


Now you sit back and wait a few minutes while the Burp spider does its thing. You can check the status of a running crawling/spidering session on the Spider tab under the Control sub-tab:


To see the final results, simply navigate back to the Target tab and the Sitemap sub-tab and you will see all of the discovered links on the left-hand side. You can expand folders and dynamic pages to see what is within them and which parameters can be passed. You can also see a list of the URL links in the right-hand pane, as shown below:


The example above is rather simple in nature, but this is an easy way to get a feel for what a site has going on. You can play with the options and the filters to see how they change the results and customize them to fit your needs (for example, in some cases you may want to trace which other sites the target communicates with to see if you can exploit trust relationships, and therefore might not want to filter any results in the Sitemap sub-tab, so that you can see everything linked). That sums up my brief overview of how you can use Burp Suite to map out a site’s infrastructure in a quick and easy manner. You can use the options to your advantage to find authentication forms, lost shells you might have uploaded, or just to get a better picture of what you’re up against. I encourage you to keep playing around with Burp Suite and all the tools it has to offer. I will keep working on new material to share on how we can squeeze even more usefulness out of the Burp Suite tools, until next time – Enjoy!

PS - my apologies for not including these details earlier in the series; for those who felt I left it out, you should now have everything you need to get started and on your way with the basics :)
