Archive for the ‘Programming’ Category

Scapy: pcap 2 streams

Morning readers, I thought I would start Monday with another piece of Scapy/Python coding goodness. This time, for an added treat, I’ve thrown in a bit of tshark; not because Scapy isn’t awesome, but because for this particular job tshark works much better.

Today’s code takes a pcap file and extracts all the TCP and UDP streams into a folder of your choice. If you’ve ever used Wireshark you’ll know how useful this is; it’s something I’ve built into sniffmypackets, as it’s a good way to break a pcap file down for easier analysis.

The code isn’t perfect (when is it ever?). At the moment you will get the standard “Running as user "root" and group "root". This could be dangerous.” warning if, like me, you run it on Kali; on “normal” Linux you should be fine. I am looking at how to suppress the message, but the Python subprocess module isn’t playing nicely at the moment.

The code has two functions, one for handling TCP streams and one for UDP streams. For TCP streams we use the tshark fields option to run through the pcap file and list the stream indexes, saving each index to a list if it isn’t already there. We then re-run tshark over the pcap file, pulling out one stream at a time by its index and writing it to its own pcap file.
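Just to make that concrete, here’s a rough sketch of what the TCP half boils down to, assuming tshark is on your PATH (the function name and output file naming are mine, not the actual script’s):

import subprocess

def extract_tcp_streams(pcap, outdir):
    # List the TCP stream index for every packet in the capture
    output = subprocess.check_output(['tshark', '-r', pcap, '-T', 'fields', '-e', 'tcp.stream'])
    streams = []
    for line in output.splitlines():
        line = line.strip()
        if line and line not in streams:
            streams.append(line)

    # Re-run tshark once per stream index and write each stream to its own pcap
    for index in streams:
        dumpfile = '%s/tcp-stream-%s.pcap' % (outdir, index)
        subprocess.call(['tshark', '-r', pcap, '-R', 'tcp.stream eq %s' % index, '-w', dumpfile])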

For UDP streams it’s a bit more “special”: UDP doesn’t have a stream index the way TCP does, so we have to be a bit more creative. Instead, we use Scapy to list all the conversations based on source IP, source port, destination IP and destination port, and add each one to a list. We then build the reverse of that conversation (I called it the duplicate) and check the list; if the duplicate already exists, we drop it.

Why do we do that? Because we can’t filter on a stream index, parsing the pcap file lists both sides of each UDP conversation, which means that when we write them out we end up with twice as many UDP streams as we should have. Checking for the duplicate prevents that from happening (it took me a while to figure that out originally).
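The de-duplication bit looks roughly like this (a sketch only; the variable names are mine):

from scapy.all import rdpcap, IP, UDP

def udp_conversations(pcap):
    pkts = rdpcap(pcap)
    convos = []
    for p in pkts:
        if p.haslayer(IP) and p.haslayer(UDP):
            convo = (p[IP].src, p[UDP].sport, p[IP].dst, p[UDP].dport)
            # The same exchange seen from the other direction
            duplicate = (p[IP].dst, p[UDP].dport, p[IP].src, p[UDP].sport)
            # Only keep one side of each conversation
            if convo not in convos and duplicate not in convos:
                convos.append(convo)
    return convos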

Once we have our list of UDP conversations we run tshark over the pcap file again, this time with a read filter (the -R switch). The filter looks a bit like this:

tshark -r ' + pcap + ' -R "(ip.addr eq ' + s_ip + ' and ip.addr eq ' + d_ip + ') and (udp.port eq ' + str(s_port) + ' and udp.port eq ' + str(d_port) + ')" -w ' + dumpfile

The s_ip, d_ip, s_port and d_port values come from the list we created when we pulled out all the UDP conversations. Each UDP stream is then saved to a separate file in the same directory as the TCP streams.
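If you are building that command yourself, passing the pieces to subprocess as an argument list saves you from the shell-quoting headaches of string concatenation. A minimal sketch (function name mine):

import subprocess

def dump_udp_stream(pcap, s_ip, s_port, d_ip, d_port, dumpfile):
    # Same read filter as the command above, built as a single string
    read_filter = ('(ip.addr eq %s and ip.addr eq %s) and '
                   '(udp.port eq %s and udp.port eq %s)' % (s_ip, d_ip, s_port, d_port))
    subprocess.call(['tshark', '-r', pcap, '-R', read_filter, '-w', dumpfile])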

To run the code (which can be found HERE) you need to provide two command-line arguments: the first is the pcap file, the second is the folder to store the output in. The code will create the folder if it doesn’t already exist.

./pcap-streams.py /tmp/test.pcap /tmp/output

When you run it, it will look something like this:

[Image: pcap2streams1]

If you then browse to your output folder you should see something like this (depending on the number of streams, etc.):

[Image: pcap2stream2]

So there you go, enjoy.

Categories: packets, Scapy

Scapy: pcap 2 convo

So the third blog post of the day is about a cool function in Scapy called conversations. Essentially it takes a pcap file and outputs an image of all the conversations between IP addresses. To run it in Scapy you would do something like this:

>>> pkts=rdpcap('test.pcap')
>>> pkts.conversations()

What you should get is an image popping up on your screen with all the IP conversations in it; now that’s some cool shit.

Now why would you write a script for something that simple? Well, if you want to output the image to a file (i.e. save it) there seems to be a bug in Scapy that makes it error (well, it does for me). If you try this:

>>> pkts.conversations(type='jpg',target='/tmp/1.jpg')

You get this:

>>> Error: dot: can't open /tmp/1.jpg

I did some research and it seems that the dot command (part of Graphviz), which is used to create the image, has a slightly different syntax for writing output to a file in the version shipped with Kali.

So rather than raising a bug issue with Scapy I ported the code into my own python script (I was in a rush to use it in sniffmypackets).
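If you want to sidestep the dot issue entirely, one option is to build the DOT source yourself and hand it straight to Graphviz. This isn’t the exact code from pcap-convo.py, just a rough sketch of the idea:

import subprocess
from scapy.all import rdpcap, IP

def pcap_to_convo_image(pcap, outfile):
    pkts = rdpcap(pcap)
    edges = set()
    for p in pkts:
        if p.haslayer(IP):
            edges.add((p[IP].src, p[IP].dst))

    # Build a simple directed graph in DOT format
    dot = 'digraph conversations {\n'
    for src, dst in edges:
        dot += '  "%s" -> "%s";\n' % (src, dst)
    dot += '}\n'

    # Feed the DOT source to Graphviz and write the image to disk
    proc = subprocess.Popen(['dot', '-Tjpg', '-o', outfile], stdin=subprocess.PIPE)
    proc.communicate(dot)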

The pcap-convo.py file can be found in my GitHub repo HERE.

To use the script use the following syntax:

./pcap-convo.py pcapfile outputfile

In real life that would be something like this:

./pcap-convo.py /root/pcap/test.pcap /tmp/out.jpg

Once it’s run you should see something like this:

[Image: pcap2convo1]

Check your output file and you should have something that looks like this:

[Image: pcap2convo2]

So there you go, another cool Python/Scapy lovechild.

Enjoy

Categories: packets, Scapy

Scapy: pcap 2 dns

So the second piece of code in my series on the Python & Scapy lovefest is another simple bit of code that looks through a pcap file and pulls out some DNS information. The initial thought behind this was to make it easy to spot DNS domains that might be “dodgy”, i.e. ones with lots of DNS answer records and a low(ish) TTL.

Now the downside of using Scapy to look at DNS records is that it’s very, very difficult to pull out all the DNS answer records (the IPs) that might be contained in a DNS response packet. They are essentially nested in the Scapy packet and I haven’t worked out a way to return them all (yet). However, fear not my packet warriors, there is a DNS ancount field which tells you how many answers exist in the DNS response, so I’ve used that as a measure of how “dodgy” the domain is.
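The ancount/TTL check boils down to something like this (a sketch rather than the exact script; the threshold of 4 answers matches the output shown below):

from scapy.all import rdpcap, DNS

def suspicious_dns(pcap, max_answers=4):
    pkts = rdpcap(pcap)
    for p in pkts:
        # qr == 1 means this packet is a DNS response
        if p.haslayer(DNS) and p[DNS].qr == 1 and p[DNS].ancount > 0:
            dns = p[DNS]
            qname = dns.qd.qname if dns.qd is not None else '?'
            ttl = dns.an.ttl if dns.an is not None else 0
            flag = 'SUSPECT' if dns.ancount > max_answers else 'ok'
            print('%s %s answers=%d ttl=%d' % (flag, qname, dns.ancount, ttl))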

Again the script can be found in my GitHub repo HERE.

To run the script you simply need to do this:

./pcap-dns.py pcapfile

I’m not going to paste the code into the post this time (formatting is a bitch), but you should get something like this back (I used a fast-flux pcap file for the demo).

[Image: pcap2dns]

The red coloured lines are DNS responses with more than 4 IP addresses in the answer; you can change that threshold in the code if you want, but it gives you a good idea of what is going on. I also print the TTL, because something like fast flux would typically have a high answer count and a low TTL.

Simples..

Categories: packets, Scapy

Scapy: pcap 2 geoip

So I’ve been a bit “relaxed” lately with blog posts, simply because I’ve not had anything to say or share. To be honest the last couple of months my training has gone a bit all over the place and I’ve not really focused on much other than some changes to sniffmypackets.

What I have been doing for the last few months, though, as part of my sniffmypackets work, is writing Python code that then goes into a Maltego transform. Most of these are Scapy based and have been sitting in a private GitHub repo. Rather than let them gather dust I thought I would share them with you here. That means that for the next few weeks you can expect a lot more posts, all Python and Scapy related.

Today I’m going to share with you my pcap-geoip Python script. It’s nice and simple: using MaxMind’s GeoIP database, it looks through a pcap file of your choice and returns the latitude, longitude and a Google Maps URL for every public IP address. If it spots a private IP address it gives you a warning and continues on its way.

First off you need the MaxMind GeoIP database; you can download this manually if you want, but I have a script that will do it for you.

NOTE: Both of these files are available in my GitHub repo HERE.

geoipdownload.sh

#!/bin/sh
mkdir /opt/geoipdb
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz -O /tmp/geoipdb.dat.gz
gzip -d /tmp/geoipdb.dat.gz
mv /tmp/geoipdb.dat /opt/geoipdb/

Like I said, nice and easy. Feel free to change the download location as you wish, but you will also need to update the Python script to match.

Next up is the python script (again sorry if I offend the python kings and queens with my code):

The script only needs one user defined variable to run, which is the location of the pcap file. To run the script (once you’ve updated the geoip database location) just type:

./pcap-geoip.py test.pcap

Within the python file you need to change this to match your geoip database location:

# Change this to point to your geoip database file
gi = pygeoip.GeoIP('/opt/geoipdb/geoipdb.dat')

pcap-geoip.py

#!/usr/bin/env python

import pygeoip, sys
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *

# Add some colouring for printing packets later
YELLOW = '\033[93m'
GREEN = '\033[92m'
END = '\033[0m'
RED = '\033[91m'

# Change this to point to your geoip database file
gi = pygeoip.GeoIP('/opt/geoipdb/geoipdb.dat')

if len(sys.argv) != 2:
    print 'Usage is ./pcap-geoip.py pcap-file'
    print 'Example - ./pcap-geoip.py sample.pcap'
    sys.exit(1)

pkts = rdpcap(sys.argv[1])

def geoip_locate(pkts):

    ip_raw = []
    # Exclude "local" IP address ranges
    ip_exclusions = ['192.168.', '172.', '10.', '127.']

    for x in pkts:
        if x.haslayer(IP):
            src = x.getlayer(IP).src
            if src != '0.0.0.0':
                if src not in ip_raw:
                    ip_raw.append(src)

    for s in ip_raw:
        # Check to see if IP address is "local"
        if ip_exclusions[0] in s or ip_exclusions[1] in s or ip_exclusions[2] in s or ip_exclusions[3] in s:
            print YELLOW + 'Error local Address Found ' + str(s) + END
        else:
            # Lookup the IP addresses and return some values
            rec = gi.record_by_addr(s)
            lng = rec['longitude']
            lat = rec['latitude']
            google_map_url = 'https://maps.google.co.uk/maps?z=20&q=%s,%s' % (lat, lng)
            print GREEN + '[*] IP: ' + s + ', Latitude: ' + str(lat) + ', Longitude: ' + str(lng) + ', ' + google_map_url + END

geoip_locate(pkts)

And there you have it, enjoy.

Categories: packets, Scapy

Python: Kippo 2 Cuckoo

So I’m a bit late with this blog post as I wrote the code a couple of weeks ago, but as they say “better late than never”.

A couple of weeks ago on Twitter @stevelord raised a question about the existence of some code that would allow you to take the files that evil hackers upload to a Kippo SSH honeypot and run them through a Cuckoo Sandbox instance. The response was that no such thing existed, but wouldn’t it be great if it did?

Well, that’s more than enough reason for me to have a go at writing something. The code I put online (via my GitHub repo) is still quite raw; I’ve not had any feedback to say whether it works properly or not, but the limited testing I did seemed to indicate that it does. So this is how it works.

Upload the Python script to your Kippo honeypot and ensure that it has read access to your Kippo directories. There are a number of variables in the script that might need changing; these are:

# User defined settings - change these if they are different for your setup
sandbox_ip = '127.0.0.1' # Define the IP for the Cuckoo sandbox API
sandbox_port = '8090' # Here if you need to change the port that the Cuckoo API runs on
kip_dl = '/var/kippo/dl/' # Kippo dl file location - make sure to include trailing /
file_history = 'history.txt' # Keeps a history of files submitted to Cuckoo
log_file = 'output-log.txt' # Log file for storing a list of submitted files and the task ID from Cuckoo

Most of them should make sense to you, but you will need to change them to match your environment (give me a shout if you get stuck).

Once that’s all in place the script works like this.

1. Creates a new history.txt file if it doesn’t exist (in the same directory as the script).
2. Builds a catalogue of the files in Kippo’s download directory.
3. Checks whether each file already appears in history.txt (so we don’t submit files more than once).
4. Submits the files one at a time to your Cuckoo sandbox using the Cuckoo API server (which needs to be running); a sketch of this step is below the list.
5. Updates the history.txt file with the new files.
6. Dumps out the task ID and status code so you know whether it worked.
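For reference, the submission step (item 4) against Cuckoo’s REST API looks roughly like this, assuming the api.py server is running on the address and port configured above. It uses the standard /tasks/create/file endpoint via the requests library; this is a sketch, not the exact script:

import os
import requests

def submit_to_cuckoo(filepath, sandbox_ip='127.0.0.1', sandbox_port='8090'):
    # POST the binary to the Cuckoo API and return the task id it assigns
    url = 'http://%s:%s/tasks/create/file' % (sandbox_ip, sandbox_port)
    with open(filepath, 'rb') as sample:
        response = requests.post(url, files={'file': (os.path.basename(filepath), sample)})
    response.raise_for_status()
    return response.json()['task_id']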

And that’s it really. I still need to work on the logging side of things; the log_file variable doesn’t do anything yet (nothing uses it), so that’s next on my list to resolve. If you delete the history.txt file the script will submit everything again, so be careful with it.

The script can be found HERE.

I hope it may be of use to you, and as always let me know if it doesn’t work (create an issue ticket on GitHub).

Categories: Honeypot, Malware, Python

Scapy – pcap IP rewrite

Hello reader(s), this is just a quick post to share some new code I wrote tonight; you might find it useful, or you might not.

So I’ve been trying to think of some new transforms to write for sniffMyPackets and thought it would be cool to take a TCP stream and rewrite the source and destination IP addresses and then recalculate the IP/TCP checksums so you can “replay” the packets if you so desire.

The code is, as always, written in Python and uses Scapy to work the magic. The link to the code is at the bottom of the page; it’s stored in another of my GitHub repos, specially created to hold all the “junk” code I write and want to share with you guys.

The usage is really simple: give the script your source pcap file (the one you want to rewrite), the new source and destination IP addresses, and finally the location for the output pcap file. For example:

./pcap-rewrite.py input.pcap 192.168.1.1 192.168.1.254 /tmp/output.pcap

Now, just to be clear, this is meant to work on individual TCP streams (it might even work on UDP, thinking about it), so one source IP and one destination IP; NOT pcap files with lots of different IPs.

When you run the script with the arguments defined above (did I just use “variables” in a sentence? Too much coding), you should get something like this:

[Image: terminalwindow]

What you should get inside of your pcap file is something that looks like this (well not like this as it’s my pcap file):

[Image: wireshark-new]

Which used to look like this:

[Image: wireshark-old]

Notice all the green in the new rewritten pcap file (the first Wireshark image, for those not paying attention). That’s because both the IP and TCP checksums have been deleted, and Scapy recalculates them when the packets are written back out after the IP addresses are changed (thanks to Judy Novak for that bit of useful information).
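The heart of the rewrite is only a few lines of Scapy: map each direction of the stream onto the new address pair, delete the stored checksums and let Scapy recalculate them when the packets are written out. A sketch of the idea (not the exact script):

from scapy.all import rdpcap, wrpcap, IP, TCP

def rewrite_ips(infile, new_src, new_dst, outfile):
    pkts = rdpcap(infile)
    orig_src = None
    for p in pkts:
        if p.haslayer(IP):
            # Treat the source of the first IP packet as the "client" side
            if orig_src is None:
                orig_src = p[IP].src
            # Map each direction of the stream onto the new address pair
            if p[IP].src == orig_src:
                p[IP].src, p[IP].dst = new_src, new_dst
            else:
                p[IP].src, p[IP].dst = new_dst, new_src
            # Deleting the checksums makes Scapy recalculate them on write
            del p[IP].chksum
            if p.haslayer(TCP):
                del p[TCP].chksum
    wrpcap(outfile, pkts)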

So without making you read any more here’s the code: DOWNLOAD

Code: Junk Email Downloader

So a while back someone on Twitter (sorry, can’t remember who) mentioned that when looking for sources of malware to analyse you shouldn’t overlook your junk/spam emails. What a good idea, I thought; let’s write some code to do that for me.

I’ve quickly thrown together my “Junk Email Downloader” python script which can be found HERE.

The idea is that you have a mailbox that is used just for JUNK (I use Hotmail, as I get a lot of junk via that account). The script connects to any POP3 server and downloads the emails (and deletes them afterwards, so you’ve been warned). Once it has downloaded the emails it pulls out the sender IP and a list of any URLs it finds (based on href tags), does a bit of GeoIP analysis on both (so you need the MaxMind database) and writes it all out to a text file (I’ll look at making more use of that later).
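The fetch-and-parse stage can be done with the standard library’s poplib plus a simple href regex. A sketch with placeholder server details, not the exact script:

import poplib
import re

def fetch_junk_urls(server, username, password):
    # Placeholder credentials - point these at your junk-only mailbox
    mailbox = poplib.POP3_SSL(server, 995)
    mailbox.user(username)
    mailbox.pass_(password)

    urls = []
    count = len(mailbox.list()[1])
    for i in range(1, count + 1):
        body = '\n'.join(mailbox.retr(i)[1])
        # Pull out anything that looks like an href target
        urls += re.findall(r'href=[\'"]?(http[^\'" >]+)', body)
        mailbox.dele(i)  # the script deletes mail after downloading it

    mailbox.quit()
    return urls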

After that it makes an HTTP request to each URL, checking whether it gets a 200 response back (just to make sure the URLs are still live). Each URL that responds with a 200 is then submitted to VirusTotal via their API for analysis (sorry about the multiple requests, guys).
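The liveness check and VirusTotal hand-off could look something like this (a sketch assuming the v2 url/scan endpoint and a placeholder API key):

import requests

VT_API_KEY = 'your-virustotal-api-key'  # placeholder - use your own key

def check_and_submit(url):
    # Only bother VirusTotal with URLs that still answer with a 200
    try:
        if requests.get(url, timeout=10).status_code != 200:
            return None
    except requests.RequestException:
        return None

    response = requests.post('https://www.virustotal.com/vtapi/v2/url/scan',
                             data={'apikey': VT_API_KEY, 'url': url})
    return response.json()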

It’s still a work in progress, but at over 100 lines of code it’s the biggest script I’ve ever written, so hopefully you might find it useful. Once I’ve tweaked it a bit I’m going to run it on my Raspberry Pi, the idea being that it will run once an hour or so.

In the future I will add some more VirusTotal API calls, such as IP/domain lookups, and build in Cuckoo Sandbox API calls so you can submit the URLs to your own sandbox for analysis.

Have fun and let me know what you think.

Categories: Malware, Python

Code: PDF hunter

So of late I’ve been playing around a lot with Scapy and pcap files, mostly for my sniffMyPackets project but also because it teaches me more about network forensics and python. The other area I’m starting to learn about is Malware Analysis and I’ve been spending some time looking at the Honeynet Project challenges.

One of the challenges is to find the malicious content within a PDF file that is provided to you in a pcap file. Normally I would just reach for Network Miner and rebuild the file(s) that way, but I wanted to see if I could write some code myself.

The goal of my code was simple: parse through a pcap file, identify a PDF and rebuild the file so that a tool such as exiftool or file would correctly identify it as a PDF, and so that you could open it and view the content (if you wanted to).

I follow a certain process when I’m carving up pcap files; it’s not rocket science, really just common sense. First, find the packets you are interested in (I tend to use a mix of Wireshark and Scapy for this), then look for something you can use to filter down to just the packets you want before getting into the nitty-gritty of carving them up.

For this piece of code I needed some way of identifying a PDF inside a pcap file. Since most PDFs will appear in a pcap file as part of an HTTP conversation, I parse each packet and, if the packet has a Raw layer (the Raw layer in Scapy is essentially the packet’s payload), I look for ‘Content-Type: application/pdf’. If that matches I store the TCP ACK number in a variable for use later.
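That first pass looks something like this (a sketch; the function and variable names are mine):

from scapy.all import rdpcap, TCP, Raw

def find_pdf_ack(pcap):
    pkts = rdpcap(pcap)
    for p in pkts:
        # The HTTP response headers travel in the Raw payload
        if p.haslayer(TCP) and p.haslayer(Raw):
            if 'Content-Type: application/pdf' in p[Raw].load:
                # Every segment of the download shares this ACK number
                return pkts, p[TCP].ack
    return pkts, None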

Once I have the ACK number I need to find all the packets that relate to it in order to get the whole file. It turns out the ACK is the same for all the packets in the PDF download (something I didn’t realise until I started this), so it’s a simple case of using the following code to find all the packets I’m after:

for p in pkts:
    if p.haslayer(TCP) and p.haslayer(Raw) and (p.getlayer(TCP).ack == int(ack) or p.getlayer(TCP).seq == int(ack)):
        raw = p.getlayer(Raw).load
        cfile.append(raw)

If either the TCP ACK or SEQ matches our stored ACK variable, we get the Raw layer and store it in a Python list. This means that we now have (hopefully) all the packets that make up the PDF stored nicely away, and because it’s a TCP conversation they should all be in the right order.

Now that we have all the packets, we write them out to a temporary file. It’s only temporary because if you opened it in a text editor you would see all the HTTP headers at the top and the bottom, which means that if you ran file against it you would get back a file type of “data” rather than “PDF” (which is what we are after).

So we then have to do some Python magic (well, I think it’s magic) to slice the rubbish out. This is the part that took me the longest to figure out. If you have ever looked at a PDF file in a text editor (I wouldn’t blame you if you haven’t), you’ll notice that they start with “%PDF-” and end with “%%EOF”, so finding the start of the file is easy; the problem is that a PDF can have multiple %%EOF markers towards the end of the file, and I kept cutting at the wrong point.

To fix this I came up with a bit of a long-winded way of carving the temporary file up (see the code below):

# Open the temp file, cut the HTTP headers out and then save it again as a PDF
total_lines = ''
firstcut = ''
secondcut = ''
final_cut = ''

f = open(tmpfile, 'r').readlines()

total_lines = len(f)

for x, line in enumerate(f):
    if start in line:
        firstcut = int(x)

for y, line in enumerate(f):
    if end in line:
        secondcut = int(y) + 1

f = f[firstcut:]

if int(total_lines) - int(secondcut) != 0:
    final_cut = int(total_lines) - int(secondcut)
    f = f[:-final_cut]
    outfile2.writelines(f)
    outfile2.close()
else:
    outfile2.writelines(f)
    outfile2.close()

If you read Python, awesome; if you don’t, here’s what happens.

First off I open the temporary file and count the number of lines. I look for the variable I declared at the start of the code as start (which is: start = str(‘%PDF-’)); if that’s matched, the line number is stored in the variable firstcut.

I then need to find the last cut, so I look for the variable end (which is: end = str(‘%%EOF’)). Remember I said a PDF can have multiple EOF statements; I get around that because Python overwrites the variable secondcut each time it’s matched, so the last line containing %%EOF is always the one used. I also add 1 to the line number because, in the next chunk of code, if I didn’t I would actually cut the final %%EOF out of the file (I know this because I did it, before realising what was happening).

So we now do a simple little if statement to make sure there is something at the end of the file to cut (sometimes there isn’t in the pcap files I’ve used/made); if there is, we slice the trailing HTTP headers out before saving the file. If there isn’t anything to cut, we just save the file.

Hopefully that makes sense to non-python people (I can but hope).

I’ve tested this on a number of different pcap files that have PDF downloads in them and it works: I can open and view the PDF, and if I run file or exiftool against it, it appears as a normal PDF. I’m sure there are cases where it won’t work 100%, but if you find one, let me know so I can try to fix it.

The code can be found here: https://github.com/catalyst256/PDFHunter (in my ever-growing GitHub repo). Oh and I’ve added this function into my sniffMyPackets transform pack.

Enjoy!

Categories: General, packets, Python, Scapy

sniffMyPackets: Finding Tor

I don’t normally do short random posts but I figure once in a while won’t hurt.

So I’ve been busy working on new transforms for my Maltego pcap analysis package and things are moving along nicely. Part of my process is making notes on things I think would be cool to see and then working my way through the list.

Over the weekend I added “Tor traffic” to my list. I know most of the traffic is encrypted, so I wasn’t sure if I could get a useful end result from it, but figured it was worth a look.

Anyway, I’ve thrown together some Scapy/Python code (soon to be a transform if it’s right) that I think will highlight Tor traffic in a pcap file. This is just a work in progress, so let me know if I’m miles off the mark (so to speak).

I created a pcap file by stopping the Tor service on my copy of Backtrack and then starting it again while capturing some packets. I’ve also tested it on another pcap file from the internet with some Tor bot traffic and the results are similar.

Looking through the pcap file I noticed some “strange” entries during the SSL handshake that led me to my PoC code (no, I didn’t Google first to see if it already existed).

During the SSL handshake the “Client Hello” packet includes a Server Name record which, in a “normal” handshake, might be something like www.google.com; with a Tor SSL handshake, though, it’s something like “www.wth7pbtqsw6.com”, a host which, if you ping it, doesn’t actually exist.

The other thing to note is that the SSL server name doesn’t have a corresponding DNS query; in fact, across all the packets in the pcap file there are no DNS queries/responses at all, which is another way to narrow down possible Tor traffic.

The sniffMyPackets transform can be found HERE.

So basically my Python code reads a pcap file and looks for any TCP packet with a payload that has www.xxxxxx in it. It then pulls out the key information (src IP, dst IP, sport, dport and the www. value) and displays it as neat little lines.

The code is below:

#!/usr/bin/env python

import logging, sys
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *

pcap = sys.argv[1]

pkts = rdpcap(pcap)

for x in pkts:
    if x.haslayer(TCP) and x.haslayer(Raw):
        if 'www.' in x.getlayer(Raw).load:
            for s in re.finditer('www.\w*.\w*', str(x)):
                dnsrec = s.group()
                srcip = x.getlayer(IP).src
                dstip = x.getlayer(IP).dst
                sport = x.getlayer(TCP).sport
                dport = x.getlayer(TCP).dport
                ipaddr = srcip, dstip, sport, dport, dnsrec
                print ipaddr

If I run this against my pcap file I get these results:

root@bt:~# ./lookfortor.py /root/pcaps/tor-startup.pcap
src: 192.168.1.66 dst: 89.160.29.195 sport: 40651 dport: 9001 dnsrec: www.qqsuvxwbs.com
src: 89.160.29.195 dst: 192.168.1.66 sport: 9001 dport: 40651 dnsrec: www.vlkkj3kgxh56ibujher.net0
src: 192.168.1.66 dst: 86.59.119.83 sport: 48459 dport: 443 dnsrec: www.buukx57zhxo2ujugeevlveb.com
src: 86.59.119.83 dst: 192.168.1.66 sport: 443 dport: 48459 dnsrec: www.yhlkhxjmzg3.net0
src: 192.168.1.66 dst: 38.229.70.42 sport: 44829 dport: 443 dnsrec: www.wth7pbtqsw6.com
src: 38.229.70.42 dst: 192.168.1.66 sport: 443 dport: 44829 dnsrec: www.uvk2wwbbvwpqpkux.net0
root@bt:~#

What you will notice is that the reply packet has a different dnsrec, which always has a 0 (zero) on the end.

I’ve tested this on a few pcap files and the results look consistent. Let me know what you think.

Categories: Python, sniffMyPackets

sniffMyPackets (Beta) – Released!!

Hello readers, so I just want to say something before I get into the “meat” of this post…. (bear with me)

I don’t work in InfoSec; I don’t have a full-time job where I poke holes in systems or look at IDS logs and pcap files all day long (would be nice, but...). What I do have is a passion for InfoSec and a desire to give back to the community. I’ve never met 98% of the people who read this blog, but over the last 15 months a lot of you have inspired me to push myself harder and further than I thought possible, so this is my way of paying some of that back.

I give you sniffMyPackets, Maltego transforms (based on the Canari Framework) for analysing pcap files..

I decided to write these transforms after the Cyber Security Challenge published a cipher challenge that centered around a pcap file; that, plus my interest in packets, meant this seemed like a good way to mix the two (no, I haven’t cracked the cipher challenge).

Now, this is a BETA, and “beta” means different things to different people, so here is my definition:

1. I’m not a developer/coder; I’ve been writing stuff in Python for less than a year. The transforms in this package work and, while not perfect, they are functioning. I’ve tested them against as many different pcaps as I felt was necessary to make sure they work properly.

2. This is a BETA, so things like error handling are missing from some transforms, I will get around to it but I’m learning as I go so bear with me.

3. This is by no means a finished product, I will continue to add transforms as I go and if you want anything specific let me know and I will add it (or you can add it yourself), but please send me a pcap file if you can.

4. If (more likely when) you find a problem, log it as an issue on the GitHub site; the same goes if you think of something that would improve the package.

5. I don’t have a full license of Maltego so I’ve only tested with the community edition within Backtrack (unless someone wants to donate a license..).

6. I’ve only tested this on Backtrack, for two reasons: 1) I don’t own a MacBook, and 2) Windows doesn’t play nicely with Scapy and, let’s face it, it’s not the best platform for pcap analysis.

I think that’s about it in terms of “BETA”. I’ve written a lot of the transforms using Scapy (yes, yes, I love Scapy) but, as people often say, “use the best tool for the job”, so some of them use tshark, which has been cool because I’ve learnt a lot more about both tools. Writing Python code with Scapy isn’t hard (even for me), but making the results “appear” nicely in Maltego has been challenging at times, and I’m not 100% happy with the way the entities link; but like I said, it’s a BETA… :)

I’ve created a wiki that lists the entities and transforms available and I will update them as I go, you can find the wiki here: https://github.com/catalyst256/sniffMyPackets/wiki

If you want to have a play with the transforms you can go here: https://github.com/catalyst256/sniffMyPackets

I also did a short video about the transforms (some of which have changed now) but you can find it here:

So have a play, let me know what you think (good or bad) and I will let you know about updates (new transforms etc etc) when I write them. I may even create a new twitter account so I don’t annoy people with updates all the time.

Categories: Python, sniffMyPackets