RECON FOR DUMMIES

newrouge · Published in InfoSec Write-ups · 7 min read · Apr 12, 2021

Hey everyone, I hope you all are doing well. As I said, I will be writing about creating my own recon methodology with all the tools available out there. Hunting on a target without reconnaissance always feels like something is missing, and it obviously means less attack surface; reconnaissance will also help you understand your target better. One important thing: this isn’t going to be one of those extreme reconnaissance setups you may have seen, which scan the target periodically for new hosts and subdomains. It’s a simple reconnaissance to run before approaching your target.

Now, before using tools, you first need to add those tools to your arsenal. For subdomain enumeration you need different tools like assetfinder, knock, subfinder, amass, etc. Then you will need tools to check for subdomain takeover, tools for visual reconnaissance, GitHub dorking, crawling for URLs, parameter mining, and then more tools, and more. By this point you must be thinking: OK, that’s a lot to install, even if you have some of them already. But what if you could install all of them through one single script? Right! That’s what we will do.

First, install Go on your system, as most of these tools are written in Go and it’s fast. Follow the official guide for that:
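If you’d rather not rely on your distro’s package manager (which often ships an old Go), here’s a minimal manual install sketch for 64-bit Linux; the version number below is just the one current at the time of writing, so grab the latest from https://golang.org/dl :

wget https://golang.org/dl/go1.16.3.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.16.3.linux-amd64.tar.gz
# make go itself and go-installed tools available in your shell
echo 'export PATH=$PATH:/usr/local/go/bin:$HOME/go/bin' >> ~/.bashrc
source ~/.bashrc
go version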

After installing Go, it’s time to install all those tools. For this I used “reconftw” by six2dez, as it will not only install all the basic tools you need but can also automate the recon process for you and search for vulnerabilities. But I don’t suggest firing it straight at your target, as it’s very noisy and can sometimes get your IP blocked. Also, manual hunting is fun.

Reconftw:
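Installation is roughly this, assuming the install script in the repo’s README hasn’t changed:

git clone https://github.com/six2dez/reconftw
cd reconftw
./install.sh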

Also read all the docs, especially the post-installation guide:

Since this tool also does passive enumeration of your target, you will need to get API keys for amass, subfinder, and the other tools it configures. Getting all these API keys is a time-consuming process, but it will be worth it. Get at least the free ones.
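As an example, subfinder (v2.x at the time of writing) reads its keys from a YAML config; the path, provider names, and format below are from my setup and may differ in your version, so check its docs:

~/.config/subfinder/config.yaml (excerpt):

shodan:
  - YOUR_SHODAN_API_KEY
securitytrails:
  - YOUR_SECURITYTRAILS_API_KEY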

  1. Subdomain Enumeration:

Now, instead of running all those different subdomain enumeration tools and then combining the results, I think it’s more efficient to let reconftw do this work for us, since we already invested our time in it.

./reconftw.sh -d target.com -r (for basic recon)

1.1 You now have a list of subdomains like:

> admin.target.com, dev.target.com, api.target.com, shop.target.com etc.

1.2 It’s time to check whether any HTTP or HTTPS service is running on these subdomains. For that we will use httprobe.

You can run httprobe on your list of subdomains like this:

> cat subdomains.txt | httprobe

It will give you a list of all subdomains that responded to httprobe.

Output: https://example.com, http://example.com

You can check other ports too; by default httprobe tries 80 (http) and 443 (https).
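For example, to also probe a couple of common alternative web ports (syntax as per httprobe’s README):

cat subdomains.txt | httprobe -p http:8080 -p https:8443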

Tip: Please read the manual page of every tool before you use it.

But I prefer to let reconftw do this for me as well. reconftw saves all its output in the Recon/targetname folder, containing the subdomains list, live websites/URLs list, etc.

After httprobe, it’s time to quit reconftw and go manual from here.

2. Let’s take screenshots of all targets, if you want to:

We will use gowitness for that, as reconftw already installed it.

> gowitness file -f <urlslist.txt>

or, to take a screenshot of one target at a time:

> gowitness single <onesingleurl>

It will take a screenshot of the response provided by each server.
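Recent gowitness 2.x versions can also serve a small web UI to browse the screenshots and responses they collected; if your version supports it:

gowitness report serve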

3. Collect all JS files from your target:

3.1 First, let’s collect all the URLs from the target using gospider.

After installing and running gospider, you must have seen that it fetches all the URLs from the target.

So let’s filter out the JS ones:

gospider -S urls.txt | grep js | tee -a js-urls.txt

This saves all the output to js-urls.txt.

3.2 This js-urls.txt still contains unwanted text, so let’s filter out only the URLs. After some trial and error I came up with this:

cat js-urls.txt | grep -Eo '(http|https)://[^"]+' | cut -d ']' -f 1 | sort -u

Now save the output of this to a file (say real-js-urls.txt) which has all the JS URLs.

3.3 Now check whether these URLs are even accessible for further analysis. Let’s filter out the ones returning a 200 response by curling them:

cat real-js-urls.txt | parallel -j50 -q curl -w 'Status:%{http_code}\t Size:%{size_download}\t %{url_effective}\n' -o /dev/null -sk | grep Status:200 | cut -d ' ' -f 3 > js_200.txt

Now you have a list of active JS files for analysis.

4. Using LinkFinder on JS URLs:

Although, if you noticed, gospider already did this job for you (using LinkFinder) back in section 3.1. Anyway, we will do it manually:

for i in $(cat js_200.txt); do echo "scanning $i"; python3 /root/Tools/LinkFinder/linkfinder.py -i "$i" -o cli; done

Here we passed every JS URL to LinkFinder, and maybe you will find some juicy stuff.

5. Parameter Discovery:

Now you want to find some parameters where you can look for SSRF, XSS, RCE, or anything else: basically, enumerate all those endpoints which take user input and process it. And remember, all input can be malicious.

The tools we’re going to use for this:

The basic idea is that we crawl the target for all the URLs and parameters, and then filter out the ones that take some input.

e.g. some common parameter names which take a URL as input:

goto=, redirect=, url=, redirect_uri=

Similarly, there are parameters which take input that can lead to XSS, SQLi, SSTI, LFI, etc.
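For example (my own illustrative picks, not an exhaustive list): q= and search= are classic XSS candidates, id= often hits SQLi, and file= or page= can point to LFI.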

One way would be picking a target, passing it to gau or waybackurls, and then grepping for patterns that you know are often vulnerable.

e.g. waybackurls target.com | grep redirect_uri

or waybackurls target.com | grep =

or anything else. But you can’t remember all the keywords and patterns for every vulnerability class. That’s where another of tomnomnom’s tools comes to the rescue.

gf:

reconftw already installed it, along with its patterns. Run:

gf -list

to see the available patterns matching different vulnerability classes. You can also create your own patterns; read the docs.
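A minimal sketch of a custom pattern; my-redirect.json is a hypothetical name, and gf looks for pattern files in ~/.gf by default:

cat > ~/.gf/my-redirect.json << 'EOF'
{
    "flags": "-HanrE",
    "patterns": [
        "goto=",
        "redirect_uri=",
        "next="
    ]
}
EOF

After that, cat urls.txt | gf my-redirect works like any built-in pattern.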

Now you simply have to pipe your output to | gf xss

and it will grep out all the possible parameters that can lead to XSS.

Let’s see it in action:

cat subdomains.txt | waybackurls | gf redirect | qsreplace | tee redirect.txt

We passed our subdomains to waybackurls, which fetched all the known URLs and parameters from the Wayback Machine. Then we grepped for parameters that could possibly lead to URL redirection (that alone doesn’t make it a vulnerability; it can be intended behaviour, so don’t assume). Then we used qsreplace, which normalizes the query-string values so duplicates can be removed, keeping only unique URLs.
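From here you can go one step further: qsreplace also accepts a replacement value, so you can substitute every parameter value with a URL you control and check whether the server actually redirects to it (evil.example below is just a placeholder):

cat redirect.txt | qsreplace 'https://evil.example/' | while read u; do
  # -w '%{redirect_url}' prints where the server wants to send us, without following it
  curl -sk -o /dev/null -w "%{redirect_url} <- $u\n" "$u"
done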

6. Discovering hidden parameters

Let’s say you are on some endpoint http://target.com/api/endpoint and want to discover whether there are any hidden parameters that the application accepts.

You can run Arjun on your target and you may find something; it’s easy to use. Read its docs once.
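With the pip-installed Arjun 2.x, a typical invocation looks something like this (check arjun -h, since flags have changed between versions):

arjun -u http://target.com/api/endpoint -m GET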

By this time, you have:

  1. A decent subdomains list.
  2. A live URLs list.
  3. A JS files list.
  4. A unique endpoints list.
  5. A unique parameters list.
  6. Other data that you can use for further reconnaissance, with whatever tools you want.

Now, that being said, I think this is enough for me, and for you too, to cover a pretty decent attack surface and hunt for vulnerabilities manually. Recon may be the key to finding more bugs, but it can’t be everything. You should have a keen eye to spot weird behaviours, and that comes only with practice. Go learn and hunt for new bug types and you will grow eventually.

You can cover all bug types from here; the resources are free and very good :)

Some extra reading material for better reconnaissance; you can always learn more from these:

Recon building block:

Manual GitHub dorking:

I hope this was an informative blog; share it with your fellow hunters. Feel free to reach out with any feedback or improvements.

Maybe I will write another part of this recon process with more advanced attack surface coverage. I will also write about the bugs I have been learning and how to find them.

Till next week.

Thank you.
