BSidesLondon, Maltego

Maltego Magic comes to BSides London

I’m a big fan of BSides London. It was the first security conference I ever went to, and this will be my fourth year attending. For the last couple of years I’ve been a “crew” member for the event, working in the background to help make it what we all know and love. Last year I stepped in last-minute to run a Scapy workshop; this year I’ve decided to submit one on my other favourite thing, Maltego.

Below you will find a brief description of the workshop and, if you are planning on attending, the things you will need to bring with you.

Maltego Magic – Creating transforms & other stuff

In this workshop I will teach people how to write their own Maltego transforms. Using simple-to-understand examples (and pictures, everyone likes pictures) I will lead participants through the process of creating local and remote transforms using just a pen and paper (ok, a laptop is needed as well).

A basic knowledge of Maltego & Python is useful, but the workshop will be pitched so that anyone can benefit from the magic that is Maltego, even if they haven’t coded anything before.
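If you want a sneak peek before the day: a local transform is, at heart, just a script that receives an entity value as a command-line argument and prints Maltego’s XML response to stdout. Here’s a minimal sketch (the entity type and hard-coded value are purely illustrative, not the workshop material):

```python
import sys


def transform(entity_value):
    """Return a Maltego XML response containing one illustrative entity."""
    # A local transform receives the selected entity's value as argv[1]
    # and replies on stdout with a MaltegoTransformResponseMessage.
    # The IP below is a placeholder; a real transform would look something up.
    entity = (
        '<Entity Type="maltego.IPv4Address">'
        '<Value>127.0.0.1</Value>'
        '</Entity>'
    )
    return (
        '<MaltegoMessage><MaltegoTransformResponseMessage>'
        '<Entities>' + entity + '</Entities>'
        '</MaltegoTransformResponseMessage></MaltegoMessage>'
    )


if __name__ == '__main__' and len(sys.argv) > 1:
    print(transform(sys.argv[1]))
```

Wire that script up in Maltego as a local transform and the returned entity appears on your graph; that round trip is basically what we’ll build on.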

Requirements for the day

  • Laptop (Mac OSX, Windows or Linux)
  • Python 2.7 (Python 3 is not supported)
  • The Python Requests library (sudo pip install requests)
  • Maltego (CE edition is ok)
  • A text editor that’s Python friendly or a Python IDE (Sublime Text, PyCharm etc)
  • Your imagination (borrow someone else’s if necessary)

The workshop information for the day is below:

Date: June 3rd 2015
Workshop: 3
Track No: 2
Duration: 2 hours
Schedule for: 14:00 – 16:00

If you are interested in writing Maltego transforms, come along to the workshop. If you can’t make it, I will be wandering around the con all day, so feel free to stop me and we can have a chat about Maltego Magic.

Maltego, OSint

Media Monkey – Social Media transforms for Maltego

I’ve spent the last few weeks working on a set of Maltego transforms, the idea being that hopefully they will allow you to query more Social Media sites and get useful information back from them. Now for a change they aren’t local transforms (which you would have to install); instead I’ve made them all TDS transforms, which makes it easier for you (and more work for me).

Not wanting to break from my silly naming conventions, they are named “Media Monkey”, slightly inspired by the Chaos & Security Monkey created by Netflix (but in no way associated with them).

Now before you get all excited and start using them, there are some conditions:

1. Still a work in progress – basically more will be added, some might be changed (or removed).
2. They run on a small server – I’m paying for this so at the moment there isn’t a lot of grunt behind them. Sorry but it’s free so I’m hoping that’s enough for you.
3. All the transforms run over HTTPS and I don’t log anything, unless I turn on debugging for development purposes. If I do, there is a small chance I might grab a request, but to be honest I’m not interested in what you search for.
4. The most important one.. I am in no way, shape or form responsible for how you choose to use these transforms. Act responsibly (we are all adults) and if you get yourself into trouble it’s your own fault, not mine.

The transforms at the moment cover the following:

GitHub (you need an API key)
Gravatar (surprising how much stuff people put in those profiles)
Amazon Wishlists
BT Phone Book (UK only, still working on it)
BitBucket (this one will likely change soon)
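As a small taste of how simple some of these lookups are: Gravatar profile URLs are just the MD5 hash of the lowercased, trimmed email address, so a sketch of the idea (not my actual transform code) looks like this:

```python
import hashlib


def gravatar_profile_url(email):
    """Build the JSON profile URL Gravatar exposes for an email address."""
    # Gravatar hashes the lowercased, whitespace-trimmed address with MD5.
    digest = hashlib.md5(email.strip().lower().encode('utf-8')).hexdigest()
    return 'https://www.gravatar.com/%s.json' % digest


# Fetching it is then a one-liner with requests (left commented so the
# sketch stays self-contained):
# profile = requests.get(gravatar_profile_url('someone@example.com')).json()
```

It really is surprising how much people put in those profiles once you start pulling them back.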

I’ve got loads more to add so the list will change over time.

You can find all the documentation (yes yes I wrote some documentation) HERE

I’m always looking for feedback (good or bad), so let me know if you have ideas, suggestions or even complaints (although I reserve the right to ignore those).



Maltego – gotFlow (Netflow for Maltego)

Recently I was asked to see if I could create some Maltego transforms to provide a quick analysis of Netflow data. Always up for a challenge (and to feed my Maltego addiction) I created gotFlow, which is based on the Canari Framework (for rapid Maltego transform generation).

gotFlow is currently designed to support nfdump and should still be classed as an “early release” (meaning more to come). It’s a nice simple transform set with only 3 transforms, 3 entities and 1 Maltego machine.


The transforms process works as follows:

nfdump file -> source ip -> destination ip -> destination port

The source and destination IPs are returned as Maltego IPv4 Address entities, allowing you to run additional transforms against them.

To get started you can either add a single nfdump file or import nfdump files from a directory.


From here you can run the ‘[NF] – Import Files’ transform, which will import all the nfdump files from the chosen directory.

gotFlow-Import Files

Once that’s run you should (depending on the number of nfdump files) get something that looks like this.


You can now either run the Maltego machine against the files or run the transforms separately. For the purpose of this blog post I’ve cheated and used the machine.


The Machine runs the following transforms, each feeding off the entities returned by the transform before it.

[NF] – Get Source IP
[NF] – Get Destination IP
[NF] – Get Destination Port

What you end up with is something like this:


Now I’ve tried to make it easy to determine traffic type and size by the art of colour coding (very high tech).

TCP Traffic – Red lines
UDP Traffic – Blue lines
ICMP Traffic – Green lines

The thickness of the line between the source IP and destination IP represents the size of the flow. The returned value is in bytes, which I convert to kilobytes (bytes / 1000). If the line is thin (the default) it means the flow is below 1 kilobyte.
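If you’re curious how a mapping like that might look in code, here’s a rough sketch (the function name and thickness cap are illustrative, not gotFlow’s actual code):

```python
# Map a flow's protocol and byte count to a link colour and line weight,
# mirroring the colour coding described above.
PROTO_COLOURS = {'TCP': 'red', 'UDP': 'blue', 'ICMP': 'green'}


def link_style(protocol, flow_bytes):
    """Return (colour, thickness) for a flow between two IPs."""
    colour = PROTO_COLOURS.get(protocol.upper(), 'grey')
    kilobytes = flow_bytes / 1000.0  # bytes -> kilobytes, as in the post
    # Anything under 1 kilobyte keeps the default (thin) line; larger flows
    # scale up to an arbitrary cap so one huge flow doesn't swamp the graph.
    thickness = 1 if kilobytes < 1 else min(int(kilobytes), 5)
    return colour, thickness
```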

The only configuration change you need to make before you run gotFlow is to define the location of the nfdump executable which needs to be added to:


You can find the transforms here:

Any questions, queries or suggestions let me know (email or raise an Issue on GitHub)

Maltego, sniffMyPackets

sniffMyPackets V2: Database or not??

When I started the work on sniffMyPackets version 2 I decided to make it default to using a database backend. The decision was based on trying to get the most out of the pcap files without crowding the Maltego graph. I knew at the time that this meant that people who want to use my code would need additional infrastructure for it to run. I’ve tried to minimize the pain by introducing a Vagrant machine that will build a MongoDB database instance and install the new web interface (which is awesome).

Recently I was asked if you “HAD” to use a database, and I realised that this decision might limit the number of people who will use sniffMyPackets. With that in mind I have now rewritten the Maltego transforms to work with or without a database. The limitation is that you don’t get to use all the transforms (such as replaying a session), but you still get the same output on the Maltego graph.

The changes have been pushed to the GitHub repo which is linked below:

By default the database support is switched off, the good news is that it only takes one change to enable it. When you create the Canari profile a sniffmypacketsv2.conf file is created in the src/ directory. So for example:

canari create-profile sniffmypacketsv2 -w /root/localTransforms/sniffmypacketsv2/src

This creates the sniffmypacketsv2.conf file in the /root/localTransforms/sniffmypacketsv2/src directory. The configuration file comes preconfigured with several options most of which you can change to meet your needs. The important one for the database support is under the [working] section and is called ‘usedb’ (see pictures below).

To enable database support, simply change the value from 0 to 1 and you are good to go (so to speak). You can turn it on and off whenever you want.
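Since the conf file is a standard INI-style file, you can also check (or script) the flag with Python’s configparser; a quick sketch (Python 3 syntax, and the helper name is mine, not part of sniffMyPackets):

```python
import configparser


def use_database(conf_path):
    """Return True if the [working] usedb flag in the conf file is set to 1."""
    config = configparser.ConfigParser()
    config.read(conf_path)
    # usedb is 0 (off, the default) or 1 (on); missing values count as off.
    return config.getint('working', 'usedb', fallback=0) == 1
```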

Database Support Off:


Database Support On:


Any Maltego transform that won’t work without the database just returns a message to your Output window about needing database support. To be honest, most of the important ones work without database support, so you shouldn’t notice much difference.

If you decide not to use the database then you will be missing out on this cool looking web interface (see below)


Over the next few weeks, more Maltego transforms will be created/added, and I’ve got some cool features planned in terms of Maltego TDS based transforms and for the web interface as well. I’m also going to start working on adding NetFlow support to sniffMyPackets so you can throw even more stuff into the mix.

I will also be working on the documentation and will run a blog series on some of the cool features you might not notice straight away.

Oh, and my online demo site is still up (it sometimes stops working, but that’s not my code, more the server) and you can find it at:

As always feedback is welcome (good or bad).


Maltego, sniffMyPackets

sniffMyPackets Version 2 – The Release

If you follow me on Twitter you have probably noticed me bombarding you with tweets about the next release of sniffMyPackets, well today it’s officially released in a Beta format. There is still a lot of work to be done and over the next few weeks/months expect a lot of changes. The purpose of this blog post today is to give you an overview about what this new version is all about, my hope is that you download it, try it and let me know what you think.


This is a beta. I’ve tested the code as much as possible, but you might still have issues; if you do, please raise an Issue ticket on the respective GitHub repo.

What’s New:

The original sniffMyPackets was the first piece of code I wrote and it was really more a journey of discovery than an attempt at producing anything that could be classed as “production ready”. This release of sniffMyPackets is my attempt at turning it into something that is not only useful (well, hopefully) but that can also be used and shared among analysts within an organisation.

There are two main changes to this release (aside from better Python code), the first is the use of a database backend (currently MongoDB), the second is the introduction of a web interface. These two components work in harmony with the Maltego transforms to give you more information without overwhelming the Maltego graphs but still giving you the visibility you need from within Maltego.


The database is used by the Maltego transforms to store additional information when transforming a pcap file. When building Maltego graphs you need to be able to tell a story but at the same time not overwhelm the graph with too much information (especially when building large graphs). Each transform when run not only returns a Maltego entity but also writes additional information back into the database. You can then use this information to “replay” a pcap file without needing the actual pcap files (just the Session ID).

For example, when you first add a pcap file into a blank graph you need to index it. This process generates a unique Session ID for that pcap file (based on MD5 hash) which is what is returned to your Maltego graph. What also happens is that additional information is written to the database, which is then displayed in the web interface. Below is the information collected from just that single transform:

  • MD5 Hash
  • SHA1 Hash
  • Comments
  • Upload Time
  • File Name
  • Working Directory
  • Packet Count
  • First Packet
  • Last Packet
  • PCAP ID (or Session ID)
  • File Size
While you may think this slows the transform down, I’ve made every attempt for that not to be the case. A lot of the Python code for this release has been rewritten to speed up execution and remove redundant code.
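The Session ID idea itself is easy to sketch: hash the pcap file’s contents so the same file always maps back to the same ID (hypothetical code, not the actual transform):

```python
import hashlib


def session_id(pcap_path):
    """Derive a repeatable Session ID from a pcap file's MD5 hash."""
    md5 = hashlib.md5()
    with open(pcap_path, 'rb') as f:
        # Read in chunks so large captures don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b''):
            md5.update(chunk)
    return md5.hexdigest()
```

Because the ID is derived from the file contents, re-indexing the same pcap later lands you back on the same database records.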

Web Interface:

Personally this part has been the greatest challenge for me (but at the same time it’s awesome). The whole purpose behind this addition is to allow you to share the analysis of a pcap file with anyone. As you run transforms from Maltego on a pcap file, the web UI is populated with the “detailed” information. Even if you close the Maltego graph and delete the pcap file, that information is still there and you can still “replay” it back into Maltego (cool or what..).

The web UI has some cool features:

  • File upload functionality, so you can upload pcap files or artifacts from Maltego to the web server for other people to download.
  • Full packet view: when a TCP/UDP stream is indexed in Maltego you can view each individual packet from the web interface (based on Gobbler).

Other Stuff:

A lot of the Python code has changed. For example, the old method of extracting TCP & UDP streams using tshark has been replaced with pure Python code; this means it’s quicker and removes the need to have tshark/wireshark installed (and stops issues arising when the command syntax changes). The way that sniffMyPackets checks file types has been turned into a Python function, and likewise the check to make sure a pcap file isn’t in pcap-ng format is now a pure Python function.
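The pcap vs pcap-ng check boils down to reading the file’s magic number; a sketch of how such a function might look (the byte values are the published magic numbers, but the function name is mine, not the sniffMyPackets one):

```python
def pcap_format(path):
    """Classify a capture file as 'pcap', 'pcapng' or 'unknown' by magic number."""
    # Classic libpcap files start with 0xa1b2c3d4 (or its byte-swapped /
    # nanosecond-resolution variants); pcap-ng files start with the
    # Section Header Block type 0x0a0d0d0a.
    pcap_magics = {b'\xa1\xb2\xc3\xd4', b'\xd4\xc3\xb2\xa1',
                   b'\xa1\xb2\x3c\x4d', b'\x4d\x3c\xb2\xa1'}
    with open(path, 'rb') as f:
        magic = f.read(4)
    if magic == b'\x0a\x0d\x0d\x0a':
        return 'pcapng'
    if magic in pcap_magics:
        return 'pcap'
    return 'unknown'
```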

I’ve also started to write 3rd party API plugins. Currently you can submit an artifact MD5 hash to VirusTotal to see if there is an existing report available. In the future the functionality to upload the file to VirusTotal will also be added. I’m also planning on adding CloudShark functionality so you can upload files to a CloudShark appliance or their cloud service.

Vagrant Machine:

Now, one of my concerns when I decided to add a web interface and a database backend was whether or not people had the infrastructure available to accommodate this, so in order to try and make it more accessible I created a Vagrant machine. Vagrant is an awesome tool if you do development work; it allows you to script the creation and build of a machine that is repeatable and gives the same results time after time. Within the sniffMyPackets-web repo there is a vagrant folder. If you have Vagrant & Virtualbox installed (required I’m afraid), you can simply type (from within that vagrant folder):

vagrant up

This will create a Linux server, download, install and configure the MongoDB database, and then install the required libraries for the web interface before starting it running. The initial vagrant up can take about 20 minutes (as it has to download stuff), but after that vagrant up will have your server ready in minutes.

Maltego Machines:

There are currently two Maltego Machines available. Both serve a different purpose (kind of obvious, I know).

[SmP] – Decode pcap: This is run after you have indexed a pcap file. It will extract all the TCP/UDP streams, IPv4 Addresses, DNS requests, HTTP requests etc. straight into your Maltego graph.

[SmP] – Replay Session (auto): If you want to replay a pcap file you can add a Session ID entity (with a valid Session ID), return the pcap file and then run this Maltego Machine against it. This runs every 30 seconds (useful if you use the community edition of Maltego) and will extract all the information in the database for that pcap file into a Maltego graph. This is awesome if you don’t have the original pcap files, or want to review information that is already in the database at a later stage.

How to get it:

The two GitHub repos can be found here:

Maltego Transforms:
Web Interface:

I’m in the process of adding documentation and all that fluffy stuff so bear with me, but if you’ve used sniffMyPackets before then the install instructions are the same.

If you want to see it in action there is a YouTube video below:

packets, Scapy

Scapy: Sessions (or Streams)

I’m in the process of rewriting sniffMyPackets version 2 (and yes, I’m actually doing it this time). Part of this work involves a complete rewrite of the Scapy-based code that I used in the original version. This isn’t to say anything is wrong with the original version, but my coding “abilities” have changed, so I wanted to make it more streamlined and hopefully more modular.

In the original sniffMyPackets I used tshark to extract TCP and UDP streams, which wasn’t the ideal solution, however it worked so.. This time around I’m hoping to cut out using tshark completely, so I’ve been looking at what else Scapy can do. There is a “hidden” (well, I didn’t know it was there) function called Sessions, which takes a pcap file and breaks it down into separate sessions (or streams). The plus side of this is that it’s actually really quick and doesn’t limit itself to just TCP & UDP based protocols.

I’m still playing with the function at the moment, but I’m hoping that I can use it for sniffMyPackets, so I thought I would share my findings with you.
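Conceptually, Sessions just groups packets by a flow key. Here’s a pure-Python sketch of that grouping, using made-up (proto, src, sport, dst, dport) tuples instead of real Scapy packets, to show the shape of the dictionary you get back:

```python
from collections import defaultdict


def group_sessions(packets):
    """Group packet tuples (proto, src, sport, dst, dport) into sessions.

    Mimics the shape of Scapy's sessions() output: a dict mapping a
    human-readable flow summary to the list of packets in that flow.
    """
    sessions = defaultdict(list)
    for pkt in packets:
        proto, src, sport, dst, dport = pkt
        key = '%s %s:%d > %s:%d' % (proto, src, sport, dst, dport)
        sessions[key].append(pkt)
    return dict(sessions)
```

Note that, like Scapy, this keys each direction of a conversation separately; the two halves of a TCP exchange show up as two sessions.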

First off, let’s load a pcap file into Scapy.

>>> a = rdpcap('blogexample.pcap')

This is just a sample pcap file with an ICMP ping, DNS lookup and SSL connection to

If you want to check the pcap has loaded correctly you can now just run:

>>> a.summary()

Cool, so let’s load the packets into a new variable to hold the sessions.

>>> s = a.sessions()

This loads the packets from the pcap file into the new variable as a dictionary object, which I confirmed by doing:

>>> print type(s)
<type 'dict'>

Now let’s have a look at what we’ve got to work with.

>>> s
{'UDP >': ...SNIP...

That’s just the first session object from the pcap, and it contains two parts (as required for a dictionary object). The first is a summary of the traffic: it contains the protocol, source IP address and port (split by a colon) and then the destination IP address and port (again split by a colon). The second part of the dictionary object is the actual packets that make up the session.

Let’s have a look at how to make use of the first part of the dictionary object (useful if you want a summary of the sessions in a pcap file).

>>> for k, v in s.iteritems():
...     print k
UDP >
UDP >
TCP >
UDP >

This is a nice and easy iteration of the dictionary object to get all the keys within it (the summary details). You can of course slice and dice that as you see fit to make it more presentable. The other part of the sessions dictionary is the actual packets; using a similar method we can iterate through the sessions dictionary and pull out all the raw packets.

>>> for k, v in s.iteritems():
...     print v

This will output a whole dump of all the packets for all of the sessions, which isn’t that great. Let’s see if we can make it a bit friendlier on the eye.

>>> for k, v in s.iteritems():
...     print k
...     print v.summary()
TCP >
Ether / IP / TCP > SA
Ether / IP / TCP > A
Ether / IP / TCP > A / Raw
Ether / IP / TCP > A / Raw
Ether / IP / TCP > PA / Raw
Ether / IP / TCP > PA / Raw
Ether / IP / TCP > PA / Raw
Ether / IP / TCP > A
Ether / IP / TCP > PA / Raw
Ether / IP / TCP > A
Ether / IP / TCP > PA / Raw
Ether / IP / TCP > PA / Raw
Ether / IP / TCP > A
Ether / IP / TCP > PA / Raw
Ether / IP / TCP > PA / Raw

Now isn’t that much better. As you can see from the snippet above, the captured session starts at the SYN/ACK flag (probably because I started my capture after the SYN was sent), so in reality (and with a bit more testing) it looks like the Scapy Sessions function does allow for extracting TCP streams (well, all streams really).

Let’s have a go at something else.

>>> for k, v in s.iteritems():
...     print v.time
AttributeError: 'list' object has no attribute 'time'

Oh well, I was hoping that I could get the timestamp for each packet, but it seems that you need to do some extra work first.

>>> for k, v in s.iteritems():
...     for p in v:
...         print p.time, k
1417505523.55 UDP >
1417505522.45 UDP >

That works much better. Because the packets in the second half of the dictionary are a list object, we need to iterate over them to get access to any of the values within them (such as the packet time).

So there you go, I’m off to see what else I can do with this..