
Using binsnitch.py to detect files touched by malware

Yesterday, we released binsnitch.py – a tool you can use to detect unwanted changes to the file system. The tool and documentation are available here:

Binsnitch can be used to detect silent (unwanted) changes to files on your system. It will scan a given directory recursively and keep track of any changes it detects, based on the SHA256 hash of each file. You have the option to track either executable files (based on a static list available in the source code) or all files. binsnitch.py can be used for a variety of use cases, including:

  • Use binsnitch.py to create a baseline of trusted files for a workstation (golden image) and use it again later on to automatically generate a list of all modifications made to that system (for example, caused by rogue executables installed by users, or dropped malware files). The baseline could also be used for other detection purposes later on (e.g., in a whitelist);
  • Use it to automatically generate hashes of executables (or all files, if you are feeling adventurous) in a certain directory (and its subdirectories);
  • Use it during live malware analysis to carefully track which files are touched by malware (this is the topic of this blog post).
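The change detection described above hinges on file hashing. As an illustration (a simplified sketch of the idea, not the actual binsnitch.py implementation), a file's SHA256 hash can be computed in chunks so that even large binaries don't need to fit in memory:

```python
import hashlib

def sha256_of_file(path, chunk_size=65536):
    """Compute the SHA256 hex digest of a file, reading it in chunks
    so large binaries do not need to fit in memory at once."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha256.update(chunk)
    return sha256.hexdigest()
```

A change to a single byte of a file yields a completely different digest, which is what makes hash-based tracking reliable for detecting silent modifications.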

In this blog post, we will use binsnitch.py during the analysis of a malware sample (VirusTotal link:

A summary of the options available in binsnitch.py at the time of writing:

usage: binsnitch.py [-h] [-v] [-s] [-a] [-n] [-b] [-w] dir

positional arguments:
  dir               the directory to monitor

optional arguments:
  -h, --help        show this help message and exit
  -v, --verbose     increase output verbosity
  -s, --singlepass  do a single pass over all files
  -a, --all         keep track of all files, not only executables
  -n, --new         alert on new files too, not only on modified files
  -b, --baseline    do not generate alerts (useful to create baseline)
  -w, --wipe        start with a clean db.json and alerts.log file

We are going to use binsnitch.py to detect which files are created or modified by the sample. We start our analysis by creating a “baseline” of all the executable files in the system. We will then execute the malware and run binsnitch.py again to detect changes to disk.

Creating the baseline


Command to create the baseline of our entire system: python binsnitch.py -s -b -w C:/

We only need a single pass of the file system to generate the clean baseline of our system (using the “-s” option). In addition, we are not interested in generating any alerts yet (again: we are merely generating a baseline here!), hence the “-b” option (baseline). Finally, we run binsnitch.py with the “-w” argument to start with a clean database file.

After launching the command, binsnitch.py will start hashing all the executable files it discovers and write the results to a folder called binsnitch_data. This can take a while, especially if you scan an entire drive (“C:/” in this case).


Baseline creation in progress … time to fetch some cheese in the meantime! 🐀 🧀

After the command has completed, we check the alerts file in “binsnitch_data/alerts.log”. As we ran with the “-b” option to generate a baseline, we don’t expect to see any alerts:


Baseline successfully created! No alerts in the file, as we expected.

Looks good! The baseline was created in 7 minutes.

We are now ready to launch our malware and let it do its thing (of course, we do this step in a fully isolated sandbox environment).

Running the malware sample and analyzing changes

Next, we run the malware sample. After that, we can run binsnitch.py again to check which executable files have been created (or modified):


Scanning our system again to detect changes to disk performed by the sample: python binsnitch.py -s -n C:/

We again use the “-s” flag to do a single pass of all executable files on the “C:/” drive. In addition, we also provide the “-n” flag: this ensures we are not only alerted on modified executable files, but also on new files that have been created since the creation of the baseline. Don’t run binsnitch.py using the “-w” flag this time, as this would wipe the baseline results. Optionally, you could also add the “-a” flag, which tracks ALL files (not only executable files). If you do so, make sure your baseline was also created using the “-a” flag (otherwise, you will be facing a ton of alerts in the next step!).
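Conceptually, this second pass boils down to comparing every file's current hash against the saved baseline, and only reporting previously unseen files when “-n” is given. A minimal sketch of that logic (illustrative only — the function and file names here are our own, not binsnitch.py's actual code):

```python
import hashlib
import json
import os

def compare_to_baseline(directory, db_path, alert_on_new=False):
    """Rescan a directory and compare each file's SHA256 hash against a
    previously saved baseline; report new files only when requested
    (the effect of combining the -s and -n options)."""
    with open(db_path) as f:
        baseline = json.load(f)  # {path: sha256 hex digest}
    alerts = []
    for root, _, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            if path not in baseline:
                if alert_on_new:  # only with the -n flag
                    alerts.append("New file: %s (%s)" % (path, digest))
            elif baseline[path] != digest:
                alerts.append("Modified file: %s (%s)" % (path, digest))
    return alerts
```

Without the new-file flag, a dropped executable would go unnoticed; a modified executable is always reported.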

Running the command above will again take a few minutes (in our example, it took 2 minutes to rescan the entire “C:/” drive for changes). The resulting alerts file (“binsnitch_data/alerts.log”) looks as follows:


Bingo! We can clearly spot suspicious behaviour by observing the alerts.log file. 🔥

A few observations based on the above:

  • The malware file itself was detected in “C:/malware”. This is normal, of course, since the malware file itself was not present in our baseline! However, we had to copy it in order to run it;
  • A bunch of new files are detected in the “C:/Program Files (x86)/” folder;
  • More suspicious though are the new executable files created in “C:/Users/admin/AppData/Local/Temp” and the startup folder.

The SHA256 hash of the newly created startup item is readily available in the alerts.log file: 8b030f151c855e24748a08c234cfd518d2bae6ac6075b544d775f93c4c0af2f3

Doing a quick VirusTotal search for this hash results in a clear “hit” confirming our suspicion that this sample is malicious (see below). The filename on VirusTotal also matches the filename of the executable created in the C:/Users/admin/AppData/Local/Temp folder (“A Bastard’s Tale.exe”).


VirusTotal confirms that the dropped file is malicious.

You can also dive deeper into the details of the scan by opening “binsnitch_data/data.json” (warning, this file can grow huge over time, especially when using the “-a” option!):


Details on the scanned files. In case a file is modified over time, the different hashes per file will be tracked here, too.
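A data store that keeps every hash a file has ever had could be modelled as follows (a simplified illustration of the idea; the exact data.json format used by binsnitch.py may differ):

```python
import json

def record_hash(db, path, digest):
    """Append a newly observed hash to a file's history so that
    modifications over time remain visible."""
    entry = db.setdefault(path, {"hashes": []})
    if digest not in entry["hashes"]:
        entry["hashes"].append(digest)
    return db

db = {}
record_hash(db, "C:/malware/sample.exe", "aaa111")  # first observation
record_hash(db, "C:/malware/sample.exe", "bbb222")  # file was modified
print(json.dumps(db, indent=2))
```

Because old hashes are never discarded, the full modification history of a tracked file remains available for later investigation.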

From here on, you would continue your investigation into the behaviour of the sample (network, services, memory, etc.) but this is outside the scope of this blog post.

We hope you find binsnitch.py useful during your own investigations, and let us know on GitHub if you have any suggestions for improvements, or if you want to contribute yourself!

Squeak out! 🐁


NVISO at Hack Belgium

Last week, a few of us attended the first edition of Hack Belgium. Describing what Hack Belgium is turns out to be a bit difficult: is it a conference? A hackathon? A hands-on workshop? A technology fair? A pitch? In my view, it’s all of those combined – and that is what made the event so interesting!


9AM – Daan, Jeroen, Kris & Kurt looking sharp at Hack Belgium ( … and ready for some ☕️!).

Hack Belgium offers its attendees 14 crucial challenges around topics that keep our world busy: from healthcare to mobility and building intelligent cities. Over the course of 3 days, all attendees put their heads together to come up with a bright solution to these challenges. With the help of more than 300 experts, it’s certainly a great experience and exercise in idea generation! The objective is to pitch your idea on Saturday and to walk out of Hack Belgium with a blueprint to start working on your project.


Our CEO Kurt ready to escape reality 🕶.

OK, I have to confess – we did not stay until Saturday for our pitch, however we did have a lot of fun and interesting moments on Thursday and Friday! We worked on a concept for the media to bring news in a more balanced format, we did a workshop on designing connected projects, we attended an expert session on the state of Virtual Reality (VR) and Augmented Reality (AR), and we had fun playing around with IBM Watson and other upcoming technologies.


Kris showing off his drawing skills 🐁.

Pulling off this type of event at this scale takes a lot of preparation (and guts!) – we want to congratulate the Hack Belgium staff and all the volunteers involved! You truly made this into an interesting & worthwhile event.

We hope to be present for Hack Belgium 2018 and beyond (… and we will stay for the pitch, promise 🐀!).

A few more impressions after the break!


Proud to say we didn’t lose any fingers playing around with these beasts! 🖐


Beep Boop Beep Boop


Playing around with creative ways to design connected products during a workshop. My drawing skills are inferior to Kris’s, so no close-up 😃!

Let’s get the team together…

It was the last week of April: our entire NVISO team had packed their bags and was ready to board a plane. Where to? A secret location, to celebrate the achievements of our fantastic team!

Did all of you lab rats bring your passports? 🐀


Destination: unknown…
From the very beginning, it became clear that the discovery of our destination was a fun team-building event by itself: to find out, we’d have to solve a series of technical challenges, eventually lifting the veil on that well-kept secret… right before getting our boarding passes!

In the morning, we were all supposed to meet up at our office. At exactly 9AM, we received a mail from HR containing a URL. The website was created using Drupal and contained a bit of teaser information concerning the offsite. It also had a login form, but we were lacking valid credentials. After some fiddling around and some scanning, we found it was vulnerable to Drupageddon. This allowed us to create a user account using SQL injection. Once logged into the website, we could create posts ourselves. This vulnerability also allowed us to run commands through PHP, but we weren’t able to simply launch a reverse shell. Using a Netcat pipe, we did succeed in getting shell access to the server. The next step was to look for some kind of flag. Some grepping and finding showed us the location of the flag, in a file containing instructions for the next piece of the puzzle.

Maybe we should have brought two computers into office this morning...


From there on, we were split up into two teams. Team A would remain in Brussels and Team B set off to a gas station near the highway in Breda, in the Netherlands. There, Team B was to find “Lou”. Upon arrival at the gas station, Team B inquired after Lou: the lady behind the counter looked at them as if they were about to pull a gun. Looking all over the gas station, Team B eventually identified Lou: the challenge could continue. But what should Team B tell Lou?

Team A had to assist them: very soon, they found a USB key taped to one of the GoPro action cameras left behind by the organizers to record our endeavors. Forensic analysis was on! After booting Kali, performing some Volatility magic, deciding it took too long and running strings on the dump file, Team A discovered the passphrase that should be given to Lou at the gas station.

Once Team B provided the correct passphrase to Lou, he gave the next set of instructions for both Team A and Team B. Through an image puzzle, Team B found out they had to carry on towards Schiphol, the Amsterdam airport. Lou would be there, somewhere, ready to hand out the next hint. Meanwhile, Team A was told to find an envelope at the office. After flipping over all the tables, the envelope was found: it contained yet another USB key. This time, the USB key contained an encrypted zip file with a PCAP file inside. After putting its youngest new recruit in front of the computer in true Swordfish-first-scene style, Team A cracked the password and started analysis of the PCAP file. Captured traffic in the PCAP consisted of web browsing traffic towards the website of Brussels International Airport: the hint was clear, and Team A rushed to the airport!

The destination? Dubai!

Our precious bird, watching over the Burj...


Our time in the City of Endless Possibilities
Taking some time to reflect is important. Taking some space (literally) helps to step back and look at the bigger picture. While we did reflect on where we had come from, our eyes were decidedly focused on the future. We spent quite some time discussing what we stand for as individuals and as a team: we discussed which values we want to share and live by, and how these values can make NVISO better, both for us and for our clients. The conversation resulted in valuable insights. Putting into words what we believe in, together, made everyone feel committed to upholding these values, because they represent us best.

To then put our money where our mouth was, the rest of our time was invested in taking concrete actions: we set off to select one initiative that would help NVISO improve in practice. Four teams together proposed 8 ideas, which were challenged and judged by a ‘shark tank’, our very own jury.


The proposal attracting the most support was an initiative on internal sharing of knowledge between colleagues. So in the coming months, we will be working to build a framework that supports and promotes informal sharing of experiences and skills within NVISO. Because sharing is caring!

The winners of the Shark Tank 2017 - congratulations Hans, Benoit, Mercedes, Nico and Jeroen!


But let’s not fool ourselves: the trip was not all hard work. We also found time to enjoy the local attractions of Dubai and have lots of team fun. Loyal to the good old “work hard, play hard” motto, and believing in laughter as a great way to bond with colleagues, we rushed down crazy water slides in Aquaventure, chilled at the local beach and were inspired to aim higher at Burj Khalifa. In short, we made the most of our time there, enjoying some well-deserved rest, having fun and getting to know each other better as a great team. After all, we don’t travel to the City of Endless Possibilities every week!

Aarg ... we should have taken this picture before sunset! 🐀 😁


Mitigation strategies against cyber threats

So it’s been a good two months since we went into business! We thought we’d take some time to reflect on these two months, in which we’ve seen quite some interesting security news, including the well-known Mandiant report on APT1 and the widespread Java chaos.

Last week, ENISA published a “Flash Note” on Cyber Attacks, urging organizations to take precautions to prevent cyber attacks. In order to protect against cyber threats, they provide the following recommendations:

  • Protect targets to avoid weaknesses being exploited by adversaries: prevention should be the primary defence against attacks.
  • Consider the use of more secure communication channels than e-mail in order to protect users from spoofing and phishing attacks.
  • Proactively reduce your attack surface by reducing the complexity of software installed on user devices and reducing the permissions of users to access other devices, services and applications by applying the principle of least privilege.
While we believe these are good recommendations, we are also convinced that it is often not easy for organizations to implement these types of recommendations in practice.

An interesting, more practical perspective on defending against cyber threats is provided by Australia’s DSD (Defence Signals Directorate). In the same spirit of the “SANS Top 20 Critical Controls“, the DSD has summarized a top 35 of mitigation strategies to protect against cyber threats – a ranking based on DSD’s analysis of reported security incidents and vulnerabilities discovered on Australia’s government networks.

That top 35 is of course just another large list of security controls that should be implemented. Interestingly enough, however, the DSD noticed that 85% of all cyber threats can be mitigated by implementing only the top 4 of these 35 strategies. This top 4 boils down to the following controls:

  • Application whitelisting;
  • Patching operating systems and server software;
  • Patching client-side applications;
  • Minimizing administrative privileges.

Application whitelisting

Let’s face it: with malware developing at the pace it currently does, it is unrealistic to expect antivirus solutions to offer in-depth protection against every single malware infection. They are a good and necessary first step, detecting older and known malware, but the most effective method is not trying to detect malware: it is to allow only known, valid programs to run.

While application whitelisting clearly is the “way to go” with regards to security, it’s often not an easy choice for various reasons (“Damn you IT, this is not workable!”, “We are a company with an open culture, we don’t lock down our desktops”, “We trust our employees”,…). If you ever manage to convince your organization to consider application whitelisting, another daunting challenge awaits the IT administrator: creating and managing the application white list.

Now, establishing what ‘valid’ applications are for your environment is no easy task. Next to the usual client-side applications such as PDF readers and other office software, the list of approved software will also include applications specific to your environment. So how do you go about creating and managing this type of whitelist? Here are a couple of ideas:

  • To create your initial white list, a good idea would be to use a normal employee workstation and list all of the installed applications. In a reasonably static environment, this could be a good approach, seeing as normal employees wouldn’t be able to install new applications themselves anyhow.
  • A less “strict” approach would be the use of “software signing”. You could protect your environment by only allowing software signed by trusted developers / vendors to be installed.

If you cannot implement application whitelisting in a “blocking” mode, consider configuring it to only monitor (not block) applications that are not on the white list. This will provide you with additional insights on the software running in your environment, without further restricting your employee workstations.
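The monitor-only approach can be illustrated with a short Python sketch (our own simplified example, not a substitute for a real solution such as AppLocker): hash the executable, and log anything that is not on the white list instead of blocking it.

```python
import hashlib

def audit_executable(path, whitelist, logger=print):
    """Audit-mode whitelisting: log (rather than block) any executable
    whose SHA256 hash is not on the approved list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in whitelist:
        logger("NOT WHITELISTED: %s (%s)" % (path, digest))
        return False
    return True
```

Over time, the log of non-whitelisted executables gives you exactly the insight described above: a view of the software actually running in your environment, without restricting anyone.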

When it comes down to tooling, there are several commercial solutions available. For typical large Windows environments, a good solution is AppLocker, which builds further on the older “Software Restriction Policies”.

Patching operating systems and server software

This is where it all starts. We are still seeing too many organizations that lag behind when it comes to the roll-out of security patches for operating systems and server software.

During the majority of our security assessments and penetration tests, we are frightened by the number of hosts that are missing 6-months old critical security patches. Unfortunately, any one of these critical risks is usually enough to compromise an entire Windows domain.

In order to keep abreast of new security vulnerabilities, organizations should set up an effective patch management process. I’m sure most of you have this type of process, so let’s take a closer look and see whether your process includes:

  • A security patch assessment that verifies to what extent security patches are applicable to the organization and what the actual impact is;
  • A release cycle for security patches that includes proper testing and a controlled release;
  • An emergency process for urgent and critical patches (based on the criticality of the patch, you should consider what the biggest risk is: following the process and increasing your exposure window or immediately applying the patch and risking availability issues?);
  • An overview of all systems and their configuration to ensure all concerned systems are patched;
  • A process for temporary workarounds;
  • A system that provides metrics and monitors the effectiveness of the process itself.
Patching client-side applications

Server-side vulnerabilities usually receive the most attention from organizations, even though patching client applications deserves just as much focus. Exploiting security’s weakest link (humans), attackers often use trickery (e.g. spear phishing) to exploit vulnerabilities in client applications.

Just consider the following vulnerabilities for JRE (the Java Runtime Environment):

Java Runtime Environment (JRE) vulnerability overview

These vulnerabilities often remain “hidden”, as your typical network vulnerability scanners (e.g. Qualys, NeXpose, Nessus,…) will not pick them up by default.

There’s a couple of ways of identifying what software is running on your client machines:

Network scanners can usually be configured to actually authenticate to hosts in your environment and detect vulnerabilities in the installed client software. Although this type of configuration works well and provides you with the required information, it is something that should be implemented and monitored very carefully. After all, you are providing an automated vulnerability scanner with (administrative) credentials to your client machines. I’m sure most of you have had some incidents in the past where a vulnerability scanner did some things you didn’t expect (e.g. used an “unsafe check” that takes down a vulnerable server). Now imagine the havoc a scanner could cause when it has administrative privileges to 90% of your network.

Install “application monitoring” software on your client machines. An alternative to the above approach is to install a small monitoring tool on your client machines that sends information on installed software to a central server. You can then centrally track the different software versions in your network. A possible pitfall here could be privacy issues that would prevent you from installing this type of “monitoring software” on your employee machines, so it’s something to first check with your legal department.

Minimizing administrative privileges

Once a system has been compromised (in any way possible), an attacker will attempt to further escalate their privileges, both on the system and inside the victim network. Minimizing and tracking administrative privileges will help you contain malware infections and compromises (which will eventually occur in any environment).

As a good example of this, many IT organizations have an abundance of administrative privileges attributed to IT staff (e.g. the majority of IT staff has domain administrator privileges). It is understandable that “the business needs to keep running”, but there are a couple of ways to improve the situation without “crippling” system administrators:

  • Provide system administrators with two accounts: one for normal daily usage and one that can be used for administrative tasks.
  • Review administrative accounts and apply the least privilege principle, i.e. system administrators often require elevated privileges, but not “Domain Administrator”. 
  • Review application accounts and apply the least privilege principle, i.e. do not let your applications run with SYSTEM privileges.
  • Implement and enforce a proper IAM process for administrative accounts that will monitor the spread and usage of administrative accounts.

Over the years, a number of organizations have invested time in creating “security control lists” to assist organizations in minimizing their exposure to cyber threats. While most of these are good, we believe Australia’s DSD does a very good job in defining 4 concrete and clear mitigation strategies that can be applied to most organizations.

It’s clear that each of these controls deserves a dedicated blog post providing additional insights into how it could be implemented. We do hope that this post has given you some food for thought, and we’ll make sure to regularly provide you with more insights.

If you’re interested in more NVISO updates and info on how we can help you, please visit

Mandiant APT1 report