Archive for the ‘Programming’ Category

Scapy: Heartbleed

So I might be a bit late to the game, but I posted this code on Twitter a while back and then forgot to blog about it, so here you go…

I’ve written a little snippet of Python code that uses Scapy to search through a pcap file looking for Heartbleed requests and responses. Because Scapy doesn’t have a layer for TLS, this has been done by slicing and dicing the Raw layer and pulling out the information we need. A lot of the time when I’m writing Scapy code to analyse pcap files, I use Wireshark to look at the packets and then match that up against what Scapy sees.

This is the output from Wireshark;

[Screenshot: Wireshark view of the heartbeat record]

And this is what Scapy sees;

Raw load='\x18\x03\x02\x00\x03\x01@\x00'
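
To give you an idea of the slicing and dicing involved, here’s a minimal sketch of the approach (not the exact code from the repo; the byte offsets are based on the TLS record header: content type 0x18 = heartbeat, then version and length, then the heartbeat message type, where 1 is a request and 2 is a response):

#!/usr/bin/env python
# Minimal sketch, not the exact pcap-heartbleed.py code.
from scapy.all import rdpcap, IP, TCP, Raw

pkts = rdpcap('heartbleed.pcap')

for p in pkts:
    if p.haslayer(TCP) and p.haslayer(Raw):
        data = str(p[Raw].load)
        # TLS record header: type (1 byte), version (2 bytes), length (2 bytes)
        if data[0] == '\x18' and len(data) >= 6:
            hb_type = ord(data[5])  # first byte of the heartbeat message
            if hb_type == 1:
                print 'Heartbleed Request: src: %s dst: %s' % (p[IP].src, p[IP].dst)
            elif hb_type == 2:
                print 'Heartbleed Response: src: %s dst: %s' % (p[IP].src, p[IP].dst)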

Currently the script just looks for traffic on port 443, but I will tweak it over the next week or two to handle any port.

Update: I’ve removed the port restriction, so it should now be port agnostic (hopefully).

You can find the code HERE.

To run it, just type

./pcap-heartbleed.py [pcapfile]

So for example:

./pcap-heartbleed.py heartbleed.pcap

If it finds anything it thinks is a Heartbleed packet, you get output similar to the one below (I’ve changed the IP addresses):

Heartbleed Request: src: 1.1.1.1 dst: 2.2.2.2
Heartbleed Response: src: 2.2.2.2 dst: 1.1.1.1
Heartbleed Request: src: 1.1.1.1 dst: 2.2.2.2
Heartbleed Response: src: 2.2.2.2 dst: 1.1.1.1
Heartbleed Request: src: 1.1.1.1 dst: 2.2.2.2
Heartbleed Response: src: 2.2.2.2 dst: 1.1.1.1

Enjoy!!

Categories: Python, packets, Scapy

Gobbler: Eating its way through your pcap files…

So a while back I blogged about the future of sniffMyPackets and how I was looking at building components for it that would make better use of existing systems you may already be using (rather than over-tooling things). Gobbler is the first step of that journey and is now available for your amusement.

Gobbler can be found HERE.

Essentially you feed Gobbler a pcap file and tell it how you want the data output. The currently supported outputs are:

1. Splunk (via TCP)
2. Splunk (via UDP)
3. JSON (just prints to screen at the moment).

The tool is based on Scapy (as always), and once the initial file read is done it’s actually quite quick (well, I think it is).

Moving forward I am going to add dumping into databases (MongoDB to start with) and Elasticsearch. On top of this I’m going to start writing protocol parsers for Scapy that aren’t “available”. I’ve found an HTTP one written by @steevebarbeau that is included in Gobbler, so you can now get better Scapy/HTTP dumps.

Using it with Splunk is easy: first, in your Splunk web console, create a new TCP or UDP listener (see screenshots below).

[Screenshot: Splunk data inputs page]

[Screenshot: Splunk UDP listener configuration]

NOTE: UDP listeners are better for any pcap with over 10,000 packets; I’ve had issues with TCP listeners past that point.

Once your Splunk Listener is ready, just “tweak” the gobbler.conf file to meet your needs:

[splunk]
server = 'localhost'
port = 10000
protocol = 'udp'

If you created a TCP listener, change the protocol to ‘tcp’ and gobbler will work out the rest for you.
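
Under the hood there isn’t much magic to the Splunk output: it boils down to flattening each packet into key=value pairs and firing them at the listener. Here’s a rough sketch of the idea (this isn’t Gobbler’s actual code, and the field names are just examples):

#!/usr/bin/env python
# Rough sketch of the Splunk output idea -- not Gobbler's actual code.
import socket
from scapy.all import rdpcap, IP

server, port, protocol = 'localhost', 10000, 'udp'

pkts = rdpcap('pcaps/test.pcap')

if protocol == 'udp':
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
else:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((server, port))

for p in pkts:
    if not p.haslayer(IP):
        continue
    # Flatten a few example fields into Splunk-friendly key=value pairs
    event = 'ip_src=%s ip_dst=%s ip_ttl=%s\n' % (p[IP].src, p[IP].dst, p[IP].ttl)
    if protocol == 'udp':
        s.sendto(event, (server, port))
    else:
        s.send(event)

s.close()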

Running Gobbler is easy (as it should be); from the Gobbler directory run this command:

./gobbler.py -p [pcapfile] -u [upload type]

I’ve included a one-packet pcap in the repo so you can test straight away using:

./gobbler.py -p pcaps/test.pcap -u splunk

or if you want to see the results straight away try the json option:

./gobbler.py -p pcaps/test.pcap -u json

If you load the packets into Splunk, then they will look something like this:

[Screenshot: a Gobbler packet event in Splunk]

This means that if you want to search Splunk for any packet from a source IP of your choice, you can just use this (test_index is where I am storing them):

index="test_index" "ip_src=12.129.199.110"

You can then use the built-in GeoIP lookup to find the city etc. using this Splunk search:

index="test_index" | iplocation ip_src

Which would give you back the additional fields so you can do stuff like this:

[Screenshot: Splunk iplocation search results]

The benefit of using Gobbler is that I’ve done all the hard work on the formatting. I was going to write a Splunk app, but the people at Splunk have been ignoring me, so…

I’ve successfully managed to import about 40,000 packets via a UDP listener in just a few minutes. It’s not lightning quick, but it’s a start.

Enjoy..

PS. A “rebranding” of my core projects will be underway soon so stay tuned for more updates.. :)

Categories: Scapy, sniffMyPackets

Scapy – Iterating over DNS Responses

So while doing my Scapy workshop at BSides London the other week, I stated that iterating over DNS response records with Scapy is a bit of a ball ache. Well, I’ll be honest, I was kind of wrong. It’s not that difficult; it’s just not that pretty.

This is an example of a DNS response (answer) packet from running a dig against www.google.com:

[Screenshot: Scapy view of the DNS response showing five DNSRR layers]

You will see that there are 5 DNSRR layers in the packet. When you ask Scapy to return the rdata for those layers you will only get the first one (in the Scapy code below, pkts[1] refers to the second packet in the pcap, which is the response packet).

pkts[1][DNSRR].rdata
'173.194.41.148'

In order to get the rest, you need to iterate over the additional layers.

pkts[1][5].rdata
'173.194.41.148'
pkts[1][6].rdata
'173.194.41.144'

In the example above, [5] and [6] are the layer “numbers”. To make that a bit easier to understand:

pkts[1][0] = Ether
pkts[1][1] = IP
pkts[1][2] = UDP
pkts[1][3] = DNS
pkts[1][4] = DNSQR
pkts[1][5-9] = DNSRR

So the code below is a quick way to iterate over the DNS responses by using the ancount field to determine the number of responses and then working backwards through the layers to show all the values.

#!/usr/bin/env python

from scapy.all import *

pcap = 'dns.pcap'
pkts = rdpcap(pcap)

for p in pkts:
    if p.haslayer(DNSRR):
        a_count = p[DNS].ancount
        i = a_count + 4
        while i > 4:
            print p[0][i].rdata, p[0][i].rrname
            i -= 1

So there you go, quick and dirty Scapy/Python code.

Enjoy!!

Categories: Scapy

BSides London 2014 – Scapy Workshop

So this week (Tuesday) was the 4th annual BSides London event, held at the Kensington and Chelsea Town Hall (same venue as last year). For the last 3 years I’ve attended the event not only as a participant but also as a crew member, helping make the event awesome (which it is every year) and making it a tradition not to see ANY of the talks. For me BSides is more about taking part and meeting loads of cool people than going to all the talks and snagging all the free stuff; well, OK, apart from the MWR t-shirts, but they are awesome.

This year however was slightly different, a new twist on an already awesome day. Leading up to the event I was busy helping (I use the word loosely) keep the website up to date (oh how I hate HTML) and generally just counting down the days.

This year I had planned on attending one workshop on a subject very close to my heart: Scapy, which was due to be run by Matt Erasmus (@undeadsecurity). However, the Thursday before BSides Matt had to pull out, which left a 2-hour slot in the BSides schedule free (can you guess where this is going??).

Friday morning I got an email from Iggy (@GeekChickUK) asking if I fancied running the workshop instead. Now, my initial panic-fuelled response was going to be “God no”, but then I thought “Why not, what’s the worst that can happen?”.

The next 3 days were rammed with me writing a new workshop for a group (well, I was hoping for at least 1) of people that would most likely either be professional infosec ninjas (they are all ninjas, right??) or at least be able to point out my mistakes when I made them.

On the day I think about 18 people attended my workshop; most of them laughed at my jokes and most of them (hopefully) learnt something new about how awesome Scapy is. I’ve just found out the online feedback form for the whole BSides London event contains questions about my workshop, so I will leave judging its success till I see those..

If nothing else it’s made me want to run more workshops, not just on Scapy but on other areas as I learn them. So I just want to say a BIG THANK YOU to Iggy for giving me that nudge, and of course to all the people that attended my workshop on the day.

The GitHub repo is HERE
The Slide Pack is HERE
The Scapy Cheat Card (pdf version) is HERE

Enjoy

Categories: BSidesLondon, Scapy

Scapy: pcap 2 streams

Morning readers, I thought I would start Monday morning with another piece of Scapy/Python coding goodness. This time, though, for an added treat I’ve thrown in a bit of tshark; not because Scapy isn’t awesome, but because for this piece of code tshark works much better.

The code today takes a pcap file and extracts all the TCP and UDP streams into a folder of your choice. If you’ve ever used Wireshark then you know this is a useful feature, and it’s something that I’ve put into sniffmypackets as it’s a good way to break down a pcap file for easier analysis.

The code isn’t perfect (when is it ever); at the moment you will get the standard “Running as user "root" and group "root". This could be dangerous.” error message if, like me, you run this on Kali. If you are using “normal” Linux you should be fine. I am looking at how to suppress the message, but the Python subprocess module isn’t playing nice at the moment.

The code has 2 functions, one for handling TCP streams, the other for UDP streams. For TCP streams we use the tshark “fields” option to run through the pcap file and list the stream indexes, saving them to a list (if they don’t already exist). We then re-run tshark over the pcap file, pulling out one stream at a time using the index, and write each to its own pcap file.
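
The TCP side boils down to something like this minimal sketch (not the exact code from the repo; it assumes the same tshark -R filter syntax as the command further down):

#!/usr/bin/env python
# Minimal sketch of the TCP stream extraction -- not the exact repo code.
import subprocess

def tcp_streams(pcap, outdir):
    # Use tshark's fields option to list the TCP stream index of every packet
    output = subprocess.check_output(['tshark', '-r', pcap, '-T', 'fields', '-e', 'tcp.stream'])
    streams = sorted(set(s for s in output.split() if s), key=int)
    for idx in streams:
        dumpfile = '%s/tcp-stream-%s.pcap' % (outdir, idx)
        # Re-run tshark, writing just that one stream out to its own pcap
        subprocess.call(['tshark', '-r', pcap, '-R', 'tcp.stream eq %s' % idx, '-w', dumpfile])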

For the UDP streams it’s a bit more “special”. UDP streams don’t have an index the way TCP streams do, so we have to be a bit more creative. For UDP streams we use Scapy to list all the conversations based on source IP, source port, destination IP and destination port, and add each to a list. We then create the reverse of that (I called it the duplicate) and check the list; if the duplicate exists we delete it from the list.

Why do we do that?? Well, because we can’t filter on a stream index, parsing the pcap file will list both sides of the UDP conversation, which means when we output it we end up with twice as many UDP streams as we should have. By creating the duplicate we can prevent that from happening (took me a while to figure that out originally).
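
As a rough sketch (again, not the exact repo code), the duplicate check looks something like this:

# Rough sketch of the UDP conversation list with duplicate removal.
from scapy.all import rdpcap, IP, UDP

def udp_conversations(pcap):
    convos = []
    for p in rdpcap(pcap):
        if p.haslayer(IP) and p.haslayer(UDP):
            convo = (p[IP].src, p[UDP].sport, p[IP].dst, p[UDP].dport)
            # The same conversation seen from the other side
            duplicate = (convo[2], convo[3], convo[0], convo[1])
            if convo not in convos and duplicate not in convos:
                convos.append(convo)
    return convos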

Once we have created our list of UDP streams, we use tshark to re-run over the pcap file, this time with a filter (the -R switch). The filter looks a bit like this:

tshark -r ' + pcap + ' -R "(ip.addr eq ' + s_ip + ' and ip.addr eq ' + d_ip + ') and (udp.port eq ' + str(s_port) + ' and udp.port eq ' + str(d_port) + ')" -w ' + dumpfile

The s_ip, d_ip, s_port and d_port parts come from the list we created when we pulled out all the UDP conversations. Each UDP stream is then saved to a separate file in the same directory as the TCP streams.

To run the code (which can be found HERE) you need to provide two command line arguments. The first is the pcap file; the second is the folder to store the output in. The code will create the folder if it doesn’t already exist.

./pcap-streams.py /tmp/test.pcap /tmp/output

When you run it, it will look something like this:

[Screenshot: pcap-streams.py terminal output]

If you then browse to your output folder you should see something like this (depending on the number of streams etc.):

[Screenshot: output folder containing the extracted per-stream pcap files]

So there you go, enjoy.

Categories: packets, Scapy

Scapy: pcap 2 convo

So the 3rd blog post of the day is about a cool function in Scapy called conversations. Essentially this takes a pcap file and outputs an image of all the conversations between IP addresses. To run this in Scapy you would do something like this:

>>> pkts=rdpcap('test.pcap')
>>> pkts.conversations()

What you should get is an image popping up on your screen with all the IP conversations in it. Now that’s some cool shit.

Now why would you write a script for something that simple?? Well, if you want to output the file (i.e. save it), there seems to be a bug in Scapy where it errors (well, it does for me). If you try this:

>>> pkts.conversations(type='jpg',target='/tmp/1.jpg')

You get this:

>>> Error: dot: can't open /tmp/1.jpg

I did some research, and it seems that the dot command, which is used to create the image, has a slightly different syntax for output files in the version shipped with Kali.

So rather than raising a bug issue with Scapy I ported the code into my own python script (I was in a rush to use it in sniffmypackets).
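
The workaround amounts to building the same edge list that conversations() draws and handing it to dot yourself. Here’s a minimal sketch, assuming Graphviz’s dot is installed (this isn’t the exact pcap-convo.py code):

#!/usr/bin/env python
# Minimal sketch of the conversations() workaround -- not the exact
# pcap-convo.py code. Requires Graphviz's dot on the PATH.
import subprocess
from scapy.all import rdpcap, IP

pkts = rdpcap('test.pcap')

# Collect the unique src -> dst pairs
edges = set()
for p in pkts:
    if p.haslayer(IP):
        edges.add((p[IP].src, p[IP].dst))

# Build a dot graph and let dot write the image itself
graph = 'digraph convos {\n'
for src, dst in edges:
    graph += '  "%s" -> "%s";\n' % (src, dst)
graph += '}\n'

dot = subprocess.Popen(['dot', '-Tjpg', '-o', '/tmp/out.jpg'], stdin=subprocess.PIPE)
dot.communicate(graph)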

The pcap-convo.py file can be found in my GitHub repo HERE:

To use the script use the following syntax:

./pcap-convo.py pcapfile outputfile

In real life that would be something like this:

./pcap-convo.py /root/pcap/test.pcap /tmp/out.jpg

Once it’s run you should see something like this:

[Screenshot: pcap-convo.py terminal output]

Check your output file and you should have something that looks like this:

[Screenshot: the generated conversations image]

So there you go, another cool Python/Scapy lovechild.

Enjoy

Categories: packets, Scapy

Scapy: pcap 2 dns

So the second piece of code in my series on the Python & Scapy lovefest is another simple bit of code that looks through a pcap file and pulls out some DNS information. The initial thought behind this was making it easy to look for DNS domains that might be “dodgy”, i.e. ones with lots of DNS answer records and a low(ish) TTL.

Now the downside of using Scapy to look at DNS records is that it’s very, very difficult to pull out all the DNS answer records (the IPs) that might be contained in a DNS response packet. They are essentially nested in the Scapy packet and I haven’t worked out a way to return them all (yet). However, fear not my packet warriors: there is a DNS ancount field which tells you how many answers exist in the DNS response, so I’ve used that as a measure of how “dodgy” the domain is.

Again, the script can be found on my GitHub repo HERE.

To run the script you simply need to do this:

./pcap-dns.py pcapfile

I’m not going to post the code in the post this time (formatting is a bitch), but you should get something like this back (I used a fast flux pcap file for the demo):

[Screenshot: pcap-dns.py output with suspect responses in red]

The red coloured lines are DNS responses that have over 4 IP addresses in the response; you can change this threshold in the code if you want, but it does give you a good idea of what is going on. I also throw back the TTL, because something like fast flux would probably have a high ancount and a low TTL.
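
If you want to roll your own version, the heuristic boils down to something like this sketch, assuming the same threshold of 4 answers (this isn’t the exact pcap-dns.py code):

#!/usr/bin/env python
# Minimal sketch of the ancount/TTL heuristic -- not the exact pcap-dns.py code.
from scapy.all import rdpcap, DNS, DNSRR

RED = '\033[91m'
END = '\033[0m'

pkts = rdpcap('fastflux.pcap')

for p in pkts:
    if p.haslayer(DNSRR):
        line = '%s ancount=%d ttl=%d' % (p[DNSRR].rrname, p[DNS].ancount, p[DNSRR].ttl)
        if p[DNS].ancount > 4:
            # Lots of answers: worth a closer look
            print RED + line + END
        else:
            print line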

Simples..

Categories: packets, Scapy

Scapy: pcap 2 geoip

So I’ve been a bit “relaxed” lately with blog posts, simply because I’ve not had anything to say or share. To be honest the last couple of months my training has gone a bit all over the place and I’ve not really focused on much other than some changes to sniffmypackets.

What I have been doing for the last few months though, as part of my sniffmypackets work is writing python code that then goes into a Maltego Transform. Most of these are Scapy based and have been sitting in a private GitHub repo. Rather than let them gather dust I thought I would share them with you here. That means for the next few weeks expect a lot more posts, all python & Scapy related.

Today I’m going to share with you my pcap-geoip Python script. It’s nice and simple: using MaxMind’s GeoIP database it looks through a pcap file (of your choice) and returns the latitude, longitude and Google Maps URL for all public IP addresses. If it spots a private IP address it gives you a warning and continues on its way.

First off you need the MaxMind GeoIP database. You can download this manually if you want, but I have a script that will do it for you.

NOTE: Both of these files are available in my GitHub repo HERE.

geoipdownload.sh

#!/bin/sh
mkdir /opt/geoipdb
wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz -O /tmp/geoipdb.dat.gz
gzip -d /tmp/geoipdb.dat.gz
mv /tmp/geoipdb.dat /opt/geoipdb/

Like I said, nice and easy. Feel free to change the download location as you wish, but you will need to change the Python script to accommodate the change.

Next up is the Python script (again, sorry if I offend the Python kings and queens with my code):

The script only needs one user defined variable to run, which is the location of the pcap file. To run the script (once you’ve updated the geoip database location) just type:

./pcap-geoip.py test.pcap

Within the python file you need to change this to match your geoip database location:

# Change this to point to your geoip database file
gi = pygeoip.GeoIP('/opt/geoipdb/geoipdb.dat')

pcap-geoip.py

#!/usr/bin/env python

import pygeoip, sys
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import *

# Add some colouring for printing packets later
YELLOW = '\033[93m'
GREEN = '\033[92m'
END = '\033[0m'
RED = '\033[91m'

# Change this to point to your geoip database file
gi = pygeoip.GeoIP('/opt/geoipdb/geoipdb.dat')

if len(sys.argv) != 2:
    print 'Usage is ./pcap-geoip.py pcap-file'
    print 'Example - ./pcap-geoip.py sample.pcap'
    sys.exit(1)

pkts = rdpcap(sys.argv[1])

def geoip_locate(pkts):

    ip_raw = []
    # Exclude "local" IP address ranges
    ip_exclusions = ['192.168.', '172.', '10.', '127.']

    # Build a list of unique source IP addresses
    for x in pkts:
        if x.haslayer(IP):
            src = x.getlayer(IP).src
            if src != '0.0.0.0' and src not in ip_raw:
                ip_raw.append(src)

    for s in ip_raw:
        # Check to see if the IP address is "local" (prefix match, so we
        # don't trip over addresses that merely contain '10.' etc.)
        if any(s.startswith(prefix) for prefix in ip_exclusions):
            print YELLOW + 'Error local Address Found ' + str(s) + END
        else:
            # Look up the IP address and return some values
            rec = gi.record_by_addr(s)
            lng = rec['longitude']
            lat = rec['latitude']
            google_map_url = 'https://maps.google.co.uk/maps?z=20&q=%s,%s' % (lat, lng)
            print GREEN + '[*] IP: ' + s + ', Latitude: ' + str(lat) + ', Longitude: ' + str(lng) + ', ' + google_map_url + END

geoip_locate(pkts)

And there you have it, enjoy.

Categories: packets, Scapy

Python: Kippo 2 Cuckoo

So I’m a bit late with this blog post, as I wrote the code a couple of weeks ago, but as they say, “better late than never”.

A couple of weeks ago on Twitter, @stevelord raised a question about the existence of some code that would allow you to take the files that evil hackers upload to a Kippo SSH honeypot and run them through a Cuckoo Sandbox instance. The response was no, such a thing didn’t exist, but wouldn’t it be great if it did.

Well, that’s more than enough reason for me to have a go at writing something. The code I put online (via my GitHub repo) is still quite raw; I’ve not had any feedback to say whether it works properly or not, but the limited testing I did seemed to indicate that it does. So this is how it works.

On your Kippo honeypot, upload the Python script and ensure that it has read access to your Kippo directories. There are a number of variables in the script that might need changing; these are:

# User defined settings - change these if they are different for your setup
sandbox_ip = '127.0.0.1' # Define the IP for the Cuckoo sandbox API
sandbox_port = '8090' # Here if you need to change the port that the Cuckoo API runs on
kip_dl = '/var/kippo/dl/' # Kippo dl file location - make sure to include trailing /
file_history = 'history.txt' # Keeps a history of files submitted to Cuckoo
log_file = 'output-log.txt' # Log file for storing a list of submitted files and the task ID from Cuckoo

Most of them should make sense to you, but you will need to change them to match your environment (give me a shout if you get stuck).

Once that’s all in place the script works like this.

1. Creates a new history.txt file if it doesn’t exist (in the same directory as the script is located).
2. Creates a catalogue of files in the download directory for kippo.
3. Checks to see if that file already exists in history.txt (so we don’t submit files more than once).
4. Submits the files one at a time to your Cuckoo sandbox using the Cuckoo api server (which needs to be running).
5. Updates the history.txt file with the new files.
6. Dumps out the task id and status code so you know if it’s worked.
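
In code, the core of that flow looks roughly like the sketch below. This isn’t the exact script; it assumes the requests library and Cuckoo’s REST API server running on the host and port defined above, with files submitted via the /tasks/create/file endpoint.

#!/usr/bin/env python
# Rough sketch of the submission flow -- not the exact kippo2cuckoo code.
import os
import requests

sandbox_ip = '127.0.0.1'
sandbox_port = '8090'
kip_dl = '/var/kippo/dl/'
file_history = 'history.txt'

# Create the history file if it doesn't exist, then load it
if not os.path.exists(file_history):
    open(file_history, 'w').close()
history = open(file_history).read().splitlines()

url = 'http://%s:%s/tasks/create/file' % (sandbox_ip, sandbox_port)

for name in os.listdir(kip_dl):
    if name in history:
        continue  # don't submit files more than once
    with open(os.path.join(kip_dl, name), 'rb') as sample:
        r = requests.post(url, files={'file': (name, sample)})
    print name, r.status_code, r.json().get('task_id')
    # Record the file so we skip it next time
    open(file_history, 'a').write(name + '\n')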

And that’s it really. I still need to work on the logging side of things; the log_file variable doesn’t work (as in, nothing calls it), so that’s next on my list to resolve. If you delete the history.txt file it will cause the script to submit everything again, so be careful with it.

The script can be found HERE.

I hope it may be of use to you, and as always let me know if it doesn’t work (create an issue ticket on GitHub).

Categories: Honeypot, Malware, Python

Scapy – pcap IP rewrite

Hello reader(s), this is just a quick post to share some new code I wrote tonight; you might find it useful or you might not.

So I’ve been trying to think of some new transforms to write for sniffMyPackets, and thought it would be cool to take a TCP stream, rewrite the source and destination IP addresses, and then recalculate the IP/TCP checksums so you can “replay” the packets if you so desire.

The code is, as always, written in Python and uses Scapy to work the magic. The link to the code is at the bottom of the page; it’s stored in another one of my GitHub repos, specially created to share all the “junk” code I write and want to share with you guys.

The usage is really simple: tell the script your source pcap file (the one that you want to rewrite), give it the new source and destination IP addresses, and then finally the location you want to store the output pcap file. For example:

./pcap-rewrite.py input.pcap 192.168.1.1 192.168.1.254 /tmp/output.pcap

Now just to make clear, this is meant to work on individual TCP streams (it might even work on UDP, thinking about it), so 1 source IP and 1 destination IP. NOT pcap files with lots of different IPs.

When you run the script with the variables defined above (did I just use “variables” in a sentence? Too much coding), you should get something like this:

[Screenshot: pcap-rewrite.py terminal output]

What you should get inside of your pcap file is something that looks like this (well not like this as it’s my pcap file):

[Screenshot: the rewritten pcap in Wireshark]

Which used to look like this:

[Screenshot: the original pcap in Wireshark]

Notice all the green in the new rewritten pcap file (the first Wireshark image, for those not paying attention). That’s because both the IP and TCP checksums have been deleted, and Scapy recalculates them when the IP addresses are changed (thanks to Judy Novak for that bit of useful information).
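
If you’re curious, the technique boils down to something like this minimal sketch, assuming a single TCP stream (this isn’t the exact pcap-rewrite.py code):

#!/usr/bin/env python
# Minimal sketch of the IP rewrite trick -- not the exact pcap-rewrite.py code.
import sys
from scapy.all import rdpcap, wrpcap, IP, TCP

pcap, new_src, new_dst, outfile = sys.argv[1:5]

pkts = rdpcap(pcap)
old_src = pkts[0][IP].src  # treat the first packet's source as "side A"

for p in pkts:
    if p.haslayer(IP):
        # Swap the addresses, keeping each side of the conversation consistent
        if p[IP].src == old_src:
            p[IP].src, p[IP].dst = new_src, new_dst
        else:
            p[IP].src, p[IP].dst = new_dst, new_src
        # Delete the checksums so Scapy recalculates them on write
        del p[IP].chksum
        if p.haslayer(TCP):
            del p[TCP].chksum

wrpcap(outfile, pkts)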

So without making you read any more here’s the code: DOWNLOAD
