Write-up on Blockchain data exfiltration (CSCBE18 qualifiers) challenge

This article describes the analysis of data exfiltration using blockchain as it was used in a challenge for the CSCBE 2018 qualifiers. The Cyber Security Challenge Belgium (CSCBE) is a typical Capture-The-Flag (CTF) competition aimed at students from universities and colleges all over Belgium. All of the CSCBE’s challenges are created by security professionals from many different organisations. This challenge was created by Kris Boulez, one of NVISO’s employees, during his commutes and it allowed him to play around with blockchain and cryptography, and improve his Golang and Perl skills.

The challenge consisted of three subchallenges, for which a single pcap file was provided.

Each of these subchallenges contains a separate flag, has increasing complexity and builds on the previous one. We’ll start with a general analysis of the network traffic and then dive into each subchallenge and describe how to get the different flags.

A write-up by one of the participants was published shortly after the Qualifiers describing a solution based on enumeration. Here a solution is given solely based on reversing the algorithms.

A GitHub repository (KrisBoulez/CSCBE18-dataexfill) contains all the code for setting up and running this challenge. It also includes some scripts that were used to perform the analysis in this write-up.

Generic pcap analysis

All requests are GET requests and only two types of URLs are accessed on the same server: paths starting with /block and /address

The User-Agent is “Satoshi:0.15.1”. Could this challenge be linked to Bitcoin?

To list all HTTP GET requests

# tshark -Y 'http.request.method == "GET"' -r data_exfil.pcap

To extract all the results from the response packets (further analysis is based on this extraction)

# open pcap file in Wireshark / File / Export objects / HTTP

/block request – responses

A total of 11 different /block requests are made. All requested block indices are in the range 312300 – 312399. [All other /block requests in this range would also return a block from the BTC blockchain and could have been used for solving the challenges.]

The response is JSON formatted and a cursory examination indicates it is part of the Bitcoin blockchain. Searching the BTC blockchain online shows this is indeed a single block.


The content of all the blocks served corresponds with the online version of the blocks.


[As the /block requests/responses seem genuine, from here on we will only discuss the /address requests/responses. So “all the requests” must be read as “all the /address requests”.]

Blockshark (Flag 1)

One of your employees is believed to be leaking information to a competitor in return for Bitcoins. Fortunately, we were able to capture network traffic when he was exfiltrating data!

It is believed he was transferring a super secret nota which starts with the following string: Top Secret !!!

The last GET request seems special (the address part is longer than all the others)

# tshark -Y 'http.request.method == "GET"' -r data_exfil.pcap | grep address
 3154 127.666978 → HTTP 238 GET /address/4808354973345d9b3e485f2a58210409c514042030042a3e48201f48902c58b0285d570f5d9f04?format=json HTTP/1.1
 3180 129.848436 → HTTP 238 GET /address/5d58091f4f093d660758982e1fb82209e72f585c3e1fe51d09371c5d593c2e200c040334192116?format=json HTTP/1.1
 3205 131.033759 → HTTP 310 GET /address/5d5119494e31581e35585e3e04fe065d6d3504f404480e3b048c0a5d441309fb021fb00c588714042e3b58842d1f811019242f5d9e211fef3a486b2b09cd39044a255dee025d280304980c?format=json HTTP/1.1

Looking at the exported files, the file extracted from this last request is also special (small size)

-rw-r--r-- 1 xyz zyx 19385 Feb 8 18:28 4808354973345d9b3e485f2a58210409c514042030042a3e48201f48902c58b0285d570f5d9f04%3fformat=json
-rw-r--r-- 1 xyz zyx 19385 Feb 8 18:28 5d58091f4f093d660758982e1fb82209e72f585c3e1fe51d09371c5d593c2e200c040334192116%3fformat=json
-rw-r--r-- 1 xyz zyx 206 Feb 8 18:28 5d5119494e31581e35585e3e04fe065d6d3504f404480e3b048c0a5d441309fb021fb00c588714042e3b58842d1f811019242f5d9e211fef3a486b2b09cd39044a255dee025d280304980c%3fformat=json

Examining this last result file in detail we see


The return_code seems to indicate some sort of success (HTTP 200 range of return codes)

The return_value seems like a Base64-encoded string (trailing ‘=’); testing this on https://www.base64decode.org/ we get the following result and have found the value for Flag 1

Flag 1: CSCBE{DFHJKJ*&(*UYTG%#$243}

BlocksharkNado (Flag 2)

This challenge is the follow up of Blockshark

The server the exfiltrator was communicating with has been seized and a copy of the software has been installed on:

To obtain the flag, only the same type of traffic as the exfiltrator is needed/allowed. To prove your knowledge send the following message to the server: “gimme the second flag …” (minus the ” quotes)

To find Flag 2 we have to ask “gimme the second flag …”, so we will need to at least partly understand the communication protocol. All other /address requests (apart from the last one, which we analysed above) seem very similar with respect to the size of request and response. Let’s look at the first one

1393 17.270151 → HTTP 238 GET /address/1f620e09863c3d9321492c06096b3e492012489b3b5dc03d3d6d305d082d5d9e2148301109a211?format=json HTTP/1.1

The /address requests do not seem to be in line with the info in the public BTC blockchain. A typical address request for the real blockchain is


In regular BTC requests, the address part is Base58 encoded, while here the address part only consists of hex chars (the real address can be found in the “address” field of the returned JSON file).

Comparing an /address file from the capture to the real address response, we find two additional fields

$ diff 1f620e09863c3d9321492c06096b3e492012489b3b5dc03d3d6d305d082d5d9e2148301109a211%3fformat\=json ../../19Ue2gkjFb7K3nodHSpz3DUqrVUNeNNkj2.json
< "return_code":200
< "return_value":"VG9wIFNlY3JldCAhIQ=="

Building on the knowledge we gathered from analysing the last packet of the series which gave us  Flag 1:

  • return_code: looks like an HTTP status response
  • return_value: a Base64-encoded value, which decodes to “Top Secret !!”, which coincides with the first line of text the intruder is sending
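This decoding step is easy to verify locally; a minimal Python check using the return_value seen in the diff above:

```python
import base64

# return_value observed in the first /address response
encoded = "VG9wIFNlY3JldCAhIQ=="
plaintext = base64.b64decode(encoded).decode("ascii")
print(plaintext)  # prints: Top Secret !!
```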

From the analysis of the “return_value” of the first /address file we know it encodes the text

Top Secret !!  (13 characters, including spaces)

and the request is


(78 hex chars = 13 hextets of 6 hex chars each)

Rewriting this for legibility

1f620e T
09863c o
3d9321 p
492c06 [space]
096b3e S
492012 e
489b3b c
5dc03d r
3d6d30 e
5d082d t
5d9e21 [space]
483011 !
09a211 !
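The split shown above can be reproduced with a short Python snippet (the address string is the one from the first request):

```python
# /address part of the first captured request (78 hex chars)
address = ("1f620e09863c3d9321492c06096b3e492012"
           "489b3b5dc03d3d6d305d082d5d9e2148301109a211")
plaintext = "Top Secret !!"

# cut the address into hextets of 6 hex chars each
hextets = [address[i:i + 6] for i in range(0, len(address), 6)]
for hextet, char in zip(hextets, plaintext):
    print(hextet, char)
```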

It appears not to be a simple substitution cipher, as both 492012 and 3d6d30 code for “e”.

But we now have enough ciphertext available to create the message for Flag 2 (“gimme the second flag …”). By looking at all the address requests and correlating with the return_value, one of the possible solutions is

g  5d9f04
i  3d2b33
m  3d742a
m  3d742a
e  492012
t  5d082d
h  494e31
e  492012
s  09371c
e  492012
c  489b3b
o  09863c
n  04ee2f
d  58b32e
f  5d9b3e
l  58ab23
a  581e35
g  5d9f04
.  5d973b
.  5d973b
.  5d973b
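As a sanity check, concatenating the 21 hextets from the table gives 126 hex chars; adding one hextet for each of the four spaces (e.g. 492c06 or 5d9e21, seen in the first request) brings the total to 150 hex chars, exactly the length of the last /address request in the capture. A quick Python sketch:

```python
# hextets for the non-space characters of "gimme the second flag ...",
# taken from the lookup table above
hextets = [
    "5d9f04", "3d2b33", "3d742a", "3d742a", "492012",           # gimme
    "5d082d", "494e31", "492012",                               # the
    "09371c", "492012", "489b3b", "09863c", "04ee2f", "58b32e", # second
    "5d9b3e", "58ab23", "581e35", "5d9f04",                     # flag
    "5d973b", "5d973b", "5d973b",                               # ...
]
address = "".join(hextets)
print(len(address))  # 126; with 4 space hextets added: 150
```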

Concatenating all these values and submitting the result to the server

$ curl -A Satoshi:0.15.1


And by Base64 decoding the return_value we get the following result and have found the value for Flag 2

 Flag 2: CSCBE{sdfIUerw893475#$&Y&#}  

BlocksharkNado vs BlockSharcopus (Flag 3)

This challenge is the follow up of Blockshark & BlocksharkNado

The server the exfiltrator was communicating with has been seized and a copy of the software has been installed on:

To obtain the flag, only the same type of traffic as the exfiltrator is needed/allowed. To prove your knowledge send the following message to the server: “Dear 0r&cl# (what is Flag3)” (minus the ” quotes)

So, to get Flag 3 we need to send “Dear 0r&cl# (what is Flag3)” (minus the quotes) to the server. As a lot of these characters are not present in the available plain/ciphertext, we need to reverse the communication protocol.

The address part (leaving off the “?format=json” part) is 78 chars long, except for the last one, which is 150 chars long.

  • 78 is divisible by 2, 3, 6, 13, 26 and 39;
  • 150 is also divisible by 2, 3 and 6.

The following command allows extracting all the address parts

# tshark -Y 'http.request.method == "GET"' -r data_exfil.pcap | grep address | awk '{print $9}' | sed -e 's/\/address\///' | sed -e 's/\?format=json//' > address_reqs

Frequency analysis of the duplets (i.e. two consecutive hex chars) shows that the distribution is not random: we see monotonically increasing counts, and only a few duplets occur more than 40 times.

Duplet #occ
08  18
21  18
20  19
2f  21
19  23
2e  32
49  37
48  48
3d  49
5d  67
1f  70
04  90
58  91
09  93
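The counts above can be reproduced with a few lines of Python (assuming the address strings have been collected into a list, e.g. by reading the address_reqs file):

```python
from collections import Counter

def duplet_frequencies(addresses):
    """Count non-overlapping duplets (pairs of consecutive hex chars)."""
    counts = Counter()
    for address in addresses:
        for i in range(0, len(address), 2):
            counts[address[i:i + 2]] += 1
    return counts

# example with a single request from the capture
freq = duplet_frequencies(["1f620e09863c3d9321"])
print(sorted(freq.items()))
```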

Indicating where the duplets which occur more than 40 times appear in the request, we get (only three requests shown)

^                 ^     ^           ^     ^     ^     ^     ^     ^     ^   ^
^     ^     ^     ^     ^     ^     ^     ^     ^     ^           ^     ^
^     ^     ^     ^     ^           ^     ^     ^     ^     ^     ^     ^

Looking at the output, a pattern clearly emerges. The frequently occurring duplets seem to be located at specific places in the request. Zooming in on the ones which recur in the different requests, we note that the majority are located at positions which are separated from each other by 6 (or a multiple thereof).

Hextet xxyyzz

For the rest of the analysis we will indicate the hextet as “xxyyzz” to discuss its sub-parts.

Splitting one request line into the different hextets (i.e. 6 consecutive hex chars), we get the following, where we see the repetition of the ‘xx’ part


‘xx’ part

Doing a frequency analysis on the ‘xx’ part we get (columns are: xx duplet, number of occurrence, decimal representation of xx duplet)

xx | #occ | decimal(xx)
04   77      4
09   77      9
16    5     22
19   12     25
1f   57     31
2e   17     46
3d   41     61
48   45     72
49   36     73
58   86     88
5d   66     93

The “decimal(xx)” part correlates directly with the variable part of the /block requests
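In other words, each ‘xx’ duplet appears to select one of the 11 served blocks: the block index is 312300 plus the decimal value of ‘xx’. A one-line helper (the function name is ours):

```python
def block_index_for_xx(xx):
    """Map an 'xx' duplet to the /block index it selects: 312300 + decimal(xx)."""
    return 312300 + int(xx, 16)

print(block_index_for_xx("1f"))  # 312331
```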





‘yy’ part

Following the logic for the ‘xx’ part, the ‘yy’ part would be something in each of the different block files. A quick frequency analysis shows that all ‘yy’ duplets (decimal 1 – 255) occur at roughly the same frequency.

Looking at a /block file, we see it is made up of “key”:value pairs. Frequency analysis on the “key” entries (for the 312304?format=json) gives [annotations added]

main_chain      1
mrkl_root       1
addr_tag      222  [ free text form ]
addr_tag_link 222  [ URL ]
weight        289  [ 4-5 digit number ]
vin_sz        289  [ 1-2 digit number ]
lock_time     289  [ 0 ]
out           289  [ just a place holder ]
inputs        289  [ just a place holder ]
vout_sz       289  [ 2 ]
hash          290  [ 64 hex char hash value ]
time          290  [ unix time stamp, all in the same range ]
ver           290  [ 1 ]
relayed_by    290  [ IP address ]
size          290  [ 3-4 digit number ]
prev_out      496  [ just a place holder ]
witness       497  [ empty]
sequence      497  [ the same digit for all: 4294967295 ]
value        1087
type         1087

As there seem to be at least 255 different ‘yy’ duplets, we look at the keys annotated above. Of these, “hash” looks the most promising (64 hex chars; can be used to hide lots of stuff).

We know that “1f620e” codes for “T” (first encoded character)

  • xx: 1f (dec 31) indicates the 312331.json file
  • yy: 62 (dec 98) probably somehow related to the “hash” lines
  • zz: 0e (dec 14), meaning unknown at this point
  • the hexadecimal ascii code for “T” is “54”

Looking for this hex code (54) in 312331.json reveals that 52 of the “hash” lines match. When analysing on which file lines these matches occur, we see there is also a match on line 98, which corresponds to the ‘yy’ part.

 72 "hash":"b8634bb572ae8696c50a0654451c212f75866bbad9877948309c529f50bb5d4d",
 85 "hash":"10c42099c9908546bf5eb4db835be43487818d34e9334f20390c083b45418bcc",
 90 "hash":"2b3833205d2a5b2aa3d7950c822111afb791189abc74b54c6d6f12d7600f58f8",
 92 "hash":"39116dd5d36773fb8bffe3250f71782c1ba0bda9eb0302f542dfca8881f14592",
 98 "hash":"43cffe4060bf2e543ce6b60e5714ea7ff8ef60162c3615be9dd194054e3dbdca",
107 "hash":"4154ccce0964f0f4062c919377161d130a135d9495d981c7e875be94c8eef031",
109 "hash":"e75f7df1662ccd2d8501f0d911aa18d12d1cf1a2a1ef76bc16eb0b070caf1542",

Examining the 98th hash line in more detail and looking for “54”

              ^^ (position 15-16)

“zz” has decimal value 14, while the match starts at hex-char position 15, so this lets us assume the position is an index into an array which starts at “0”.

Quickly checking this assumption we see that it also holds for the other hextets

“09863c” codes for “o”

checking our assumption

  •  xx: 09 (dec 9)
  •  yy: 86 (dec 134)
  •  zz: 3c (dec 60)
  •  “o”: ascii code “6f”

312309.json, looking for “6f”

131 "hash":"efd8e356f05dbc396ac6a9fe9ad143b61e7ab39b091d70f7e2126c708373ced4",
133 "hash":"0b83dde3c9bbdb06f438db9c503ee060bde941c4f1f6176f1c3b66c559d1d4f1",
134 "hash":"e12217881e2b80950a86bbc071b1800c65edcc11d4431cbcd1810a4afda46fbb",
149 "hash":"71b69de1368e73b66f1ccffd036440a422f6828db85c0c6d3ac7809eca86b31c",
155 "hash":"d4b3f72b3d84378be66fa9b9bf26ff8e2cc5efc8f4aece7290f6f52a65c15f2c",

and looking at the 134-th “hash” line we find “6f”

                                           (position 61-62) ^^

Now that we know how the encryption works, we can look up all the symbols we need to create the message for Flag 3
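Putting the pieces together, the decoding scheme can be sketched in Python. This is our reconstruction of the analysis above (block_lines is assumed to map each block index to the list of lines of its pretty-printed JSON file):

```python
def decode_hextet(hextet, block_lines):
    """Decode one 'xxyyzz' hextet:
    - xx selects the block file (312300 + xx),
    - yy is the 1-based line number of a "hash" line in that file,
    - zz is the 0-based offset of the character's hex code in the hash value."""
    xx, yy, zz = (int(hextet[i:i + 2], 16) for i in (0, 2, 4))
    line = block_lines[312300 + xx][yy - 1]
    hash_value = line.split('"')[3]  # the 64-hex-char hash on that line
    return bytes.fromhex(hash_value[zz:zz + 2]).decode("ascii")
```

For the hextet 1f620e this selects line 98 of 312331.json, whose hash contains “54” at positions 15-16, decoding to “T”.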

$ curl -A Satoshi:0.15.1


Base64 decoding the return_value we get the third flag

 Flag 3: CSCBE{jz2h478dfg^#%%$%jdfj}

[Another approach for finding the missing characters was brute force. Once the concept of the hextets was clear, one could iterate over the different yy and zz values. The webserver implemented rate limiting, which hindered this approach but did not make it impossible.]

About the author

Kris Boulez has extensive experience in Information Security. He joined NVISO in early 2017. The last decade he has mainly worked on Enterprise Security Architectures (ESA), PKI and (Web) Application Security. For ESA, he strongly believes in a business-driven approach (SABSA) and for human well-being in the healing power of coffee. You can find Kris on LinkedIn.


Painless Cuckoo Sandbox Installation

TLDR: As part of our SANS SEC599 development efforts, we updated (fixed + added some new features) an existing Cuckoo Auto Install script by Buguroo Security to automate Cuckoo sandbox installation (& VM import). Download it from our Github here.

As a blue team member, you often have a need to analyze a piece of malware yourself. For example, if you discover a malware sample in your network and suspect it might be part of a targeted attack. There are a few options in this case: you could reverse the sample or do some static analysis, but maybe you want to get a “quick” insight by running it in a sandbox… It is often not desirable to submit your samples to public, online malware analysis services, as this might tip off your adversaries. So… How to proceed?

In the SANS training SEC599 that we’ve co-developed at NVISO (“Defeating Advanced Adversaries – Implementing Kill Chain Defenses”), we decided we wanted to show students how analysis can be performed using Cuckoo sandbox, a popular open source automated malware analysis system (We do love Cuckoo!).

After manually re-deploying Cuckoo a number of times in different environments, I (Erik speaking here) figured there must be a better way… After some Googling, I found the CuckooAutoinstall script created by Buguroo Security!

An excellent effort indeed, but it hadn’t been updated for a while, so we decided to update this script to include some additional features and enabled it to run with the latest version of Ubuntu (16.04 LTS) and Cuckoo (2.0.5). You can find it on our GitHub repository.

Preparing your sandbox
Before we do a walk-through of this script, let’s pause a moment to consider what it takes to set up a malware analysis environment. The type of analysis performed by Cuckoo can be classified as dynamic analysis: the malware sample is executed in a controlled environment (a Virtual Machine) and its behavior is observed. As most malware targets the Windows operating system and/or the applications running on it, you will need to create a Windows VM for Cuckoo.

You will need to step out of your role as a blue team member to prepare this VM: it has to be as vulnerable as possible! To increase the chances of malware executing inside the VM, you will have to disable most of the protections and hardening you would implement on machines in your corporate network. For example: don’t install an anti-virus in this VM, disable UAC, don’t install patches, …

To properly analyze malicious Office documents, you will use an older, unpatched version of Microsoft Office and you will disable macro security: you want the macros to run as soon as the document is opened, without user interaction.

Take your hardening policies and guidelines, and then …, do the opposite! It will be fun (they said…)!

Using CuckooAutoinstall
Installing Cuckoo with CuckooAutoinstall is easy: prepare your VM and export it to the OVA format, update the script with your VM settings, and execute it as root. We will discuss how you can create Cuckoo analysis VMs in a follow-up blogpost!

It’s best that your Ubuntu OS is up-to-date before you launch the script, otherwise you might encounter errors when CuckooAutoinstall installs the necessary dependencies, like this error on Ubuntu:


Updating the script with your VM settings is easy, these are the parameters you can change:





Then execute the script as root. It will take about 15 minutes to execute, depending on your Internet speed and the size of your VM. If you are interested in seeing the progress of the script step by step, use the --verbose option.


When the script finishes execution, you want to see only green check-marks:


Testing your Cuckoo installation
To start Cuckoo, you execute the cuckoo-start.sh script created by CuckooAutoinstall for you:



Then you can use a web browser to navigate to port 8000 on the machine where you installed Cuckoo:


Submit a sample, and let it run:



After a minute, you’ll be able to view the report. Make sure you do this, because if you get the following message, your guest VM is not properly configured for Cuckoo:


The best way to fix issues with your guest VM, is to log on with the cuckoo user (credentials can be found & modified in the CuckooAutoinstall script), start VirtualBox and launch your VM from the “clean” snapshot.



Once you have troubleshooted your VM (for example, fix network issues), you take a snapshot and name this snapshot “clean” (and remove the previous “clean” snapshot). Then try to submit again (for each analysis, Cuckoo will launch your VM from the “clean” snapshot).

This will give you a report without errors:


Although installing Cuckoo can be difficult, the CuckooAutoinstall script will automate the installation and make it easy, provided you configured your guest VM properly. We plan to address the configuration of your guest VM in an upcoming blog post, but for now you can use Cuckoo’s recommendations.

It is possible to install Cuckoo (with CuckooAutoinstall) on Ubuntu running in a virtual environment. We have done this with AWS and VMware. There can be some limitations though, depending on your environment. For example, it could be possible that you can not configure more than one CPU for your Cuckoo guest VM. As there are malware samples that try to evade detection by checking the number of CPUs, this could be an issue, and you would be better off using a physical Cuckoo install.

Want to learn more? Please do join us at one of the upcoming SEC599 SANS classes, which was co-authored by NVISO’s experts!

About the authors
Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.

Erik Van Buggenhout is a co-founder of NVISO with vast experience in penetration testing, red teaming and incident response. He is also a Certified Instructor for the SANS Institute, where he is the co-author of SEC599. You can find Erik on Twitter and LinkedIn.

Filtering out top 1 million domains from corporate network traffic

During network traffic analysis and malware investigations, we often use IP and domain reputation lists to quickly filter out traffic we can expect to be benign. This typically includes filtering out traffic related to the top X most popular websites world-wide.

For some detection mechanisms, this technique of filtering out popular traffic is not recommended – for example, for the detection of phishing e-mail addresses (which are often hosted on respected & popular e-mail hosting services such as Gmail). However for the detection of certain network attacks, this can be very effective at filtering out unnecessary noise (for example, the detection of custom Command & Control domains that impersonate legitimate services).

For this type of filtering, the Alexa top 1 million was often used in the past; however, this free service was recently retired. A more recent and up-to-date alternative is the Majestic Million.
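A minimal version of such a filter can be sketched in Python. The “Domain” column name matches the Majestic Million CSV format, but the registered-domain extraction below (last two labels) is a deliberate simplification:

```python
import csv
import io

def load_top_domains(csv_text, n):
    """Read the first n domains from a Majestic Million-style CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["Domain"].lower() for row, _ in zip(reader, range(n))}

def filter_queries(queries, top_domains):
    """Keep only DNS queries whose base domain (naively: the last two
    labels) is not in the top list."""
    kept = []
    for query in queries:
        labels = query.lower().rstrip(".").split(".")
        base = ".".join(labels[-2:])
        if base not in top_domains:
            kept.append(query)
    return kept
```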

We ran some tests against a few million DNS requests in a corporate network environment, and measured the impact of filtering out the most popular domain names.


Progressively filtering out the Top 1000 most popular domains from network traffic. The Y-axis shows the % of traffic that matches the Top list for a given size.

When progressively filtering out more domains from the top 1000 of most popular domains, we notice a steep increase of matched traffic, especially for the top 500 domains. By just filtering out the top 100 most popular domains (including Facebook, Google, Instagram, Twitter, etc.) we already filter out 12% of our traffic. For the top 1000, we arrive at 24%.

We can also do the same exercise for the top 1 million of most popular domains, resulting in the graph below.


Progressively filtering out the Top 1.000.000 most popular domains from network traffic. The Y-axis shows the % of traffic that matches the Top list for a given size.

When progressively filtering out all top 1 million domains from DNS traffic, we notice a progressive increase for the top 500.000 (with a noticeable jump around that mark for this particular case, caused by a few popular region-specific domains). Between the top 500.000 and the top 1.000.000, only an additional 11% of traffic is filtered out.

Based on the above charts, we see that the “sweet spot” in this case lies around 100.000, where we still filter out a significant amount of traffic (49%) while lowering the risk of filtering out too many domains which could be useful for our analysis. Finding this appropriate threshold greatly depends on the type of analysis you are doing, as well as the particular environment you are working in; however, we have seen similarly shaped charts for other corporate environments, too, with non-significant differences in terms of the observed trends.

As you can see, this is a highly effective strategy for filtering out possibly less interesting traffic during investigations. Do note that there’s a disclaimer to be made: several of the top 1 million domains are expired and can thus be re-registered by adversaries and subsequently used for Command & Control! As always in cyber security, there’s no silver bullet, but we hope you enjoyed our analysis :-).

About the author
Daan Raman is in charge of NVISO Labs, the research arm of NVISO. Together with the team, he drives initiatives around innovation to ensure we stay on top of our game; innovating the things we do, the technology we use and the way we work form an essential part of this. Daan doesn’t like to write about himself in third-person. You can contact him at draman@nviso.be or find Daan online on Twitter and LinkedIn.

Creating custom YARA rules

In a previous post, we created YARA rules to detect compromised CCleaner executables (YARA rules to detect compromised CCleaner executables). We will use this example as an opportunity to illustrate how the creation of these custom YARA rules was performed.

In its blog post, Talos shared 3 hashes as Indicators Of Compromise (IOCs):


Although not explicitly stated in the Talos report, these are SHA256 hashes. We can infer this from the length of the hexadecimal strings. A hexadecimal character represents 4 bits, hence  64 hexadecimal characters represent 256 bits.

Admittedly, YARA is not the best tool to scan files with hashes as IOCs. YARA is designed to use pattern matching to scan malware, and early versions of YARA did not even have the capability to use cryptographic hashes. In later versions, a hash module was introduced to provide this capability.

A simple rule to match files based on a SHA256 IOC can be implemented like this:

import "hash"

rule simple_hash_rule {
    condition:
        hash.sha256(0, filesize) == "1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff"
}

Line 1 imports the hash module, which we need to calculate the SHA256 value of the content of a file.

The rule simple_hash_rule has just a condition (line 5): the SHA256 value of the content of the file is calculated with method call hash.sha256(0, filesize) and then compared to the IOC as a string. Remark that string comparison is case-sensitive, and that the cryptographic hash functions of the hash module return the hash values as lowercase hexadecimal strings. Hence, we have to provide the IOC hash as a lowercase string.

Using this simple rule to scan the C: drive of a Windows workstation in search of compromised CCleaner executables will take time: the YARA engine has to completely read each file on the disk to calculate its SHA256 value.

This can be optimized by writing a rule that will not hash files that can not possibly yield the hash we are looking for. One method to achieve this, is to first check the file size of a file and only calculate the SHA256 hash if the file size corresponds to the file size of the compromised executable.

This is what we did in this rule:

import "hash"

rule ccleaner_compromised_installer {
    condition:
        filesize == 9791816 and hash.sha256(0, filesize) == "1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff"
}

In this rule, we first compare the filesize variable to 9791816 and only then calculate the hash. Because the and operator uses lazy evaluation, the right-side operand will only be evaluated if the left-side operand is true. With this, we achieve our goal: only if the filesize is right will the SHA256 hash be calculated. This speeds up the scanning process by excluding files with the wrong file size.
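The same size-first short-circuit can be illustrated in Python (the helper below is our own sketch, not part of YARA; the size is the one mentioned in the text):

```python
import hashlib
import os

IOC_SHA256 = "1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff"

def matches_ioc(path, ioc_sha256=IOC_SHA256, expected_size=9791816):
    """Check the cheap file size first; compute the expensive hash only on a size match."""
    if os.path.getsize(path) != expected_size:
        return False
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == ioc_sha256
```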

There is one caveat though: Talos did not report the file sizes for the IOCs they published. However, we were able to obtain samples of the compromised CCleaner files, and that’s how we know the file sizes.

If you are not in a position to get a copy of the samples, you can still search the Internet for information on these samples like the file size. For example, this particular sample was analyzed by Hybrid Analysis, and the file size is mentioned in the report:

You can also find information for this sample on VirusTotal, but the analysis report does not mention the exact file size, just 9.34 MB.

This information can still be used to create a rule that is optimized for performance, by accepting a range of possible file size values, like this:

import "hash"

rule filesize_and_hash {
    condition:
        filesize >= 9.33 * 1024 * 1024 and filesize <= 9.35 * 1024 * 1024 and hash.sha256(0, filesize) == "1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff"
}

This rule will first check if the file size is between 9.33MB and 9.35MB (1024 * 1024 is 1 MB) and if true, then calculate the hash.


Writing custom YARA rules is a skill that requires practice. If you want to become proficient at writing your own YARA rules, we recommend that you embrace all opportunities to write rules, and step by step make more complex rules.

About the author

Didier Stevens is a malware expert working for NVISO. Didier is a SANS Internet Storm Center handler and Microsoft MVP, and has developed numerous popular tools to assist with malware analysis. You can find Didier on Twitter and LinkedIn.


“We are happy to share our knowledge with the community and thus hope you enjoyed the content provided on this article! If you’re interested in receiving dedicated support to help you improve your cyber security capabilities, please don’t hesitate to have a look at the services NVISO offers! We can help you define a cyber security strategy / roadmap, but can also offer highly technical services such as security assessments, security monitoring, threat hunting & incident response (https://www.nviso.be). Want to learn similar techniques? Have a look at the SEC599 SANS class, which was co-authored by NVISO’s experts! (https://www.sans.org/course/defeating-advanced-adversaries-kill-chain-defenses)”

How CSCBE’s “Modbusted” challenge came to be

About the CSCBE

The Cyber Security Challenge Belgium (CSCBE) is a typical Capture-The-Flag (CTF) competition aimed at students from universities and colleges all over Belgium. All of the CSCBE’s challenges are created by security professionals from many different organisations.  The “Modbusted” challenge was created by Jonas B, one of NVISO’s employees.

First, some statistics about the Modbusted challenge during this year’s CSCBE qualifiers. The challenge was rated as quite easy, with a score of 40 points. It was solved by 43 teams, with the first solve by team “Riskture”, about 1 hour and 6 minutes into the CSCBE 2018. Unsurprisingly, all 8 teams that made it to the finals were able to solve it.

Before diving into the details of how it was created, we’ll give you the chance to try out the challenge yourself:

Oh no, we’ve discovered some weird behaviour in a crucial part of our nuclear power plant! In order to investigate the problem, we attached some monitoring equipment to our devices. Could you investigate the captured traffic? Hopefully we’re not compromised …

You can download the PCAP file here: CSCBE_Modbusted

After you’ve solved it, or if you just want to see how the PCAP was created, read on below!

Intro to Modbusted

Another fitting title for this blog post would have been “PCAP manipulation 101”. For this challenge, we didn’t want to generate a PCAP from scratch, since that would probably have required multiple VMs to simulate machines and additional effort to generate some realistic-looking traffic. That’s why we went for an existing PCAP containing sample Modbus traffic, more precisely DigitalBond’s Modbus test data part 1.

This traffic will serve as the legitimate traffic. In the PCAP, we can discern some master devices (for example a human-machine interface), with IPs ending in .57 and .9 that are querying slave devices with IPs .8 and .3 (for example PLCs) using Modbus TCP.

Generating the data exfiltration traffic

The malicious traffic will consist of some kind of encoded data exfiltration from an infected master system towards a fake slave device, which simulates a physical device attached to the network that further communicates with the adversary using a 4G connection. So for this part, two virtual machines were in fact needed to generate the traffic.

For the slave machine simulation, there is a Java application called “ModbusPal” that is able to emulate a Modbus slave. To communicate with the slave, the master can use “mbtget”, a command-line Modbus client written in Perl. For the data exfiltration part, we wrote a short script that converts the flag letter by letter into its ASCII code and sends that code as Modbus’ unit identifier.


for i in $(seq 1 ${#word}); do
  code=$(printf "%d" "'${word:i-1:1}")
  mbtget -r1 -u $code -n 1
done

The unit identifier specifies the slave address, but it is often not used, since the slave is usually identified by its IP address. Only when multiple devices are connected behind a single IP will there be different unit identifiers. As a result, communication containing many different unit identifiers should arouse suspicion.
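On the analysis side, recovering the flag is simply the reverse mapping: collect the unit identifiers and turn each one back into a character. A minimal Python sketch, assuming the unit identifiers have already been extracted from the capture (for instance via Wireshark’s mbtcp.unit_id field):

```python
# Sketch: decode exfiltrated data hidden in Modbus unit identifiers.
# Assumes the IDs were extracted beforehand, e.g. with:
#   tshark -r capture.pcap -Y mbtcp -T fields -e mbtcp.unit_id
def decode_unit_ids(unit_ids):
    """Each unit identifier carries the ASCII code of one flag character."""
    return "".join(chr(uid) for uid in unit_ids)

# Hypothetical values: the ASCII codes for "CSC{"
print(decode_unit_ids([67, 83, 67, 123]))  # -> CSC{
```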

Will it blend?

Okay, we have simulated data exfiltration using the Modbus TCP protocol. However, the traffic in our malicious PCAP looks nothing like the traffic in the original PCAP. So, can we make it blend in?

First of all, while running the script and capturing the Modbus exfiltration traffic, there is also some undesired noise generated by the virtual machines. To get rid of this, we can use a tool called “editcap”, which allows us to delete certain entries or split the PCAP. For example, to delete the first two ARP entries shown in the image below, we run:



editcap Malicious_traffic1.pcap Malicious_traffic2.pcap 1 2

Next up is changing the source and destination addresses. We can do this using the “tcprewrite” tool. However, unless we only have unidirectional traffic, we need some extra processing to specify the bidirectional traffic flows. That’s where “tcpprep” comes into play.

Using tcpprep, it’s possible to determine the communication flows in the PCAP and thus mark servers and clients to correctly substitute information. Here, we use the port mode, which is applied when you want to use the source and destination port of TCP/UDP packets to classify hosts. So, by running the following command, we can create a cache file to be used with the tcprewrite command.

tcpprep --port --pcap=Malicious_traffic2.pcap --cachefile=csc.cache

You can view the information stored in the cachefile by using:

tcpprep --print-info csc.cache

This command also tells us that it marked traffic from the master as “primary” and traffic from the slave as “secondary”. Keeping this in mind, we’ll use the following command to rewrite the layer 3 address information and assign the correct IP addresses. As a result, the master gets the same IP as in the legitimate traffic PCAP, and the slave gets another IP in the same subnet:

tcprewrite --endpoints= --cachefile=csc.cache --infile=Malicious_traffic2.pcap --outfile=Malicious_traffic3.pcap

It’s nice that we managed to rewrite the layer 3 addresses, but one look at the MAC addresses and you immediately notice the malicious traffic. For that reason, we also need to change the layer 2 addressing info:

tcprewrite --enet-smac=00:20:78:00:62:0d,00:90:E8:1A:C3:37 --enet-dmac=00:90:E8:1A:C3:37,00:20:78:00:62:0d --cachefile=csc.cache --infile=Malicious_traffic3.pcap --outfile=Malicious_traffic4.pcap

For the master, we obviously chose the same MAC address as in the original traffic (00:20:78:00:62:0d). To make the slave less suspicious, we couldn’t keep its current value, which would give away the VM environment. Instead of picking a random value, we went for a MAC address belonging to MOXA (00:90:E8:…), an industrial device provider. With these actions completed, our data exfiltration traffic blends in with the legitimate traffic.

Traffic merge

Now we have two PCAPs: one with legitimate traffic, and one with malicious traffic. To finish things up, we have to merge them into one file. Thanks to “tcpreplay”, we can retransmit a PCAP’s traffic over an interface and specify the sending rate.

Since the original PCAP has traffic spread over a period of nearly half an hour, we’ll speed things up a little and set the sending rate to 2 packets per second. For the malicious traffic, we have a lot more packets, so to fit the same time frame, we need to set the rate to 5 packets per second.
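The rate choice is simple arithmetic: fit the packet count into the capture’s time span. A small sketch using round, hypothetical packet counts (not the exact counts from the PCAPs):

```python
import math

def replay_rate(packet_count, window_seconds):
    """Packets per second needed to fit a replay into a given time window."""
    return math.ceil(packet_count / window_seconds)

# Hypothetical counts: ~3600 legitimate and ~9000 malicious packets,
# both squeezed into the same half-hour (1800 s) window.
print(replay_rate(3600, 1800))  # -> 2
print(replay_rate(9000, 1800))  # -> 5
```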


Running both of the tcpreplay commands with Wireshark listening on the interface produces the PCAP that we want!

Jonas is a senior security consultant at NVISO, where his main focus is infrastructure assessments. Next to defensive tasks, such as architecture reviews, he also likes to get offensive and perform network pentesting and red teaming. You can find Jonas on LinkedIn.

“We are happy to share our knowledge with the community and thus hope you enjoyed the content provided on this article! If you’re interested in receiving dedicated support to help you improve your cyber security capabilities, please don’t hesitate to have a look at the services NVISO offers! We can help you define a cyber security strategy / roadmap, but can also offer highly technical services such as security assessments, security monitoring, threat hunting & incident response (https://www.nviso.be). Want to learn similar techniques? Have a look at the SEC599 SANS class, which was co-authored by NVISO’s experts! (https://www.sans.org/course/defeating-advanced-adversaries-kill-chain-defenses)”

Intercepting Belgian eID (PKCS#11) traffic with Burp Suite on OS X / Kali / Windows

TL;DR: You can configure Burp to use your PKCS#11 (or Belgian eID) card to set up client-authenticated SSL sessions, which you can then intercept and modify.

This blog post shows how you can easily view and modify your own, local traffic. In order to complete this tutorial, you need a valid eID card and the corresponding PIN code. This does not mean that there is a vulnerability in PKCS#11 or the Belgian eID; we only show that intercepting your own traffic is possible for research purposes.

We sometimes come across web applications that are only accessible if you have a valid Belgian eID card, which contains your public/private key. Of course, we still need to intercept the traffic with Burp, so we have to do some setting up. Because I always forget the proper way to do it, I decided to write it down and share it with the internets.

This guide was written for OS X, but I will add information for Windows & Kali too where it differs from OS X. This guide also assumes that you’ve correctly installed your eID card already, which is a whole challenge in itself…

The target

The Belgian Government has created a test page, which can be found here: http://www.test.eid.belgium.be/. Once you click “Start Test”, you will be taken to the HTTPS version, which requires your client certificate. After entering your PIN code, you can view the result of the test:

[Screenshot: eID test result page]

The first thing to do is set your browser to use Burp as its proxy. This can easily be done using FoxyProxy (Firefox) / SwitchyOmega (Chrome) or your system-wide proxy (shiver).

After setting up Burp, the site will not be able to load your client certificate:

[Screenshot: client certificate fails to load]

Configuring Burp

Setting up Burp is not too difficult. The main challenge is finding out the location of the PKCS#11 library file. I’ve listed the locations for OS X, Kali and Windows below.

  1. Insert your eID card into the reader and restart Burp
  2. Go to Project settings -> SSL Tab -> Override user options and click ‘Add’
  3. Host: <empty> (or be specific, if you want)
    Certificate type: Hardware token (PKCS#11)
    [Screenshot: Burp client SSL certificate settings]
  4. Supply the path of the correct library. Here are some locations:
    1. OS X
      /usr/local/lib/libbeidpkcs11.dylib
      (symlinked to /usr/local/lib/libbeidpkcs11.4.3.5.dylib in my case. There might be multiple versions there; if one doesn’t work, try another)
    2. Windows
      C:\Windows\System32\beid_ff_pkcs11.dll,
      or use the auto search function
    3. Kali
      (see “building from source” below)
  5. If you get “Unable to load library – check file is correct and device is installed”, restart Burp with your eID already plugged in and working in the eID viewer.
  6. Enter your PIN, click refresh and select the Authentication certificate:
    [Screenshot: selecting the Authentication certificate]
  7. Optionally, restart Firefox / Burp (this fixed an issue once)

Et voila, we can now use Burp to intercept the traffic:

[Screenshot: intercepted eID traffic in Burp]

Building from source on Kali (Rolling, 2017.3)

Installing the default Debian package doesn’t seem to work, so we’ll have to compile from source. You can do this by following the steps listed below.

Note that we’ve had Kali VMs crash on boot after installing the dependencies below and compiling the source. An easy solution is to take a snapshot, compile, and copy over the .so, or use the one we compiled (libbeidpkcs11.so).

> wget https://eid.belgium.be/sites/default/files/software/eid-mw-4.3.3-v4.3.3.tar.gz
> tar -xvf eid-mw-4.3.3-v4.3.3.tar.gz
> cd eid-mw-4.3.3-v4.3.3
> apt-get install pkg-config libpcsclite-dev libgtk-3-dev libxml2-dev libproxy-dev libssl-dev libcurl4-gnutls-dev
> ./configure && make && make install

After these steps, you can find the .so library at /usr/local/lib/libbeidpkcs11.so.

About the author

Jeroen Beckers is a mobile security expert working in the NVISO Cyber Resilience team. He also loves to program, both on high and low level stuff, and deep diving into the Android internals doesn’t scare him. You can find Jeroen on LinkedIn.



Going beyond Wireshark: experiments in visualising network traffic

At NVISO Labs, we are constantly trying to find better ways of understanding the data our analysts are looking at. This ranges from our SOC analysts looking at millions of collected data points per day all the way to the malware analyst tearing apart a malware sample and trying to make sense of its behaviour.

In this context, our work often involves investigating raw network traffic logs. Analysing these often takes a lot of time: a 1 MB network traffic capture (PCAP) can easily contain several hundred packets, and most captures are much larger! Going through these network events in traditional tools such as Wireshark is extremely valuable; however, these tools are not always the best way to quickly understand, from a higher level, what is actually going on.

In this blog post we want to perform a series of experiments to try and improve our understanding of captured network traffic as intuitively as possible, by exclusively using interactive visualisations.

[Screenshot: Wireshark showing the sample PCAP]

A screen we are all familiar with – our beloved Wireshark! Unmatched capabilities to analyse even the most exotic protocols, but scrolling & filtering through events can be daunting if we want to quickly understand what is actually happening inside the PCAP. In the screenshot: a 15 kB sample containing 112 packets.

For this blog post, we will use this simple 112-packet PCAP to experiment with novel ways of visualising and understanding our network data. Let’s go!

Experiment 1 – Visualising network traffic using graph nodes
As a first step, we simply represent all IP packets in our PCAP as unconnected graph nodes. Each dot in the visualisation represents the source of a packet. A packet being sent from source A to destination B is visualised as the dot visually traveling from A to B. This simple principle is highlighted below. For our experiments, the time dimension is normalised: each packet traveling from A to B is visualised in the order they took place, but we don’t distinguish the duration between packets for now.


IP traffic illustrated as interactive nodes.

This visualisation already allows us to quickly see and understand a few things:

  • We quickly see which IP addresses are most actively communicating with each other
  • It’s quickly visible which hosts account for the “bulk” of the traffic
  • We see how interactions between systems change as time moves on.
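The first observation, spotting the busiest conversations, boils down to counting packets per (source, destination) pair. A toy Python sketch with made-up addresses:

```python
from collections import Counter

# Made-up packet list: each entry is a (source IP, destination IP) pair.
packets = [
    ("10.0.0.5", "8.8.8.8"),
    ("10.0.0.5", "8.8.8.8"),
    ("10.0.0.7", "10.0.0.1"),
]

# Counting per pair is exactly what the node animation makes visible at a glance.
pair_counts = Counter(packets)
print(pair_counts.most_common(1))  # busiest conversation first
```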

A few shortcomings of this first experiment include:

  • We have no clue on the actual content of the network communication (is this DNS? HTTP? Something else?)
  • IP addresses are made to be processed by a computer, not by a human; adding additional context to make them easier to classify by a human analyst would definitely help.

Experiment 2 – Adding context to IP address information
By using basic information to enrich the nodes in our graph, we can aggregate all LAN traffic into a single node. This lets us quickly see which external systems our LAN hosts are communicating with:


Tagging the LAN segment clearly shows which packets leave the network.

By doing this, we have improved our understanding of the data in a few ways:

  • We can very quickly see which traffic is leaving our LAN, and which external (internet facing) systems are involved in the communication.
  • All the internal LAN traffic is now represented as 1 node in our graph; this can be interesting in case our analyst wants to quickly check which network segments are involved in the communication.
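The aggregation step itself can be sketched with Python’s standard ipaddress module, collapsing every private (RFC 1918) address into a single node; this is an illustration of the idea, not the actual visualisation code:

```python
import ipaddress

def node_label(ip):
    """Collapse all private (RFC 1918) addresses into a single 'LAN' node;
    external hosts keep a node of their own."""
    return "LAN" if ipaddress.ip_address(ip).is_private else ip

print(node_label("192.168.1.10"))  # -> LAN
print(node_label("8.8.8.8"))       # -> 8.8.8.8
```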

However, we still face a few shortcomings:

  • We still don’t really have a clue on the actual content of the network communication (is this DNS? HTTP? Something else?)
  • We don’t know much about the external systems that are being contacted.

Experiment 3 – Isolating specific types of traffic
By applying simple visual filters to our simulation (just like we do in tools like Wireshark), we can make a selection of the packets we want to investigate. The benefit is that we can focus on the type of traffic we want to investigate, without being burdened by things we don’t care about at that point in the analysis.

In the example below, we have isolated DNS traffic; we quickly see that the PCAP contains communication between hosts in our LAN (remember, the LAN dot now represents traffic from multiple hosts!) and 2 different DNS servers.


When isolating DNS traffic in our graph, we clearly see communication with a non-corporate DNS server.
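The isolation itself is nothing more than a display filter applied to the packet stream, sketched here over simplified, made-up packet tuples, keeping anything that touches port 53:

```python
# Simplified, made-up packets: (src IP, src port, dst IP, dst port).
packets = [
    ("10.0.0.5", 51000, "10.0.0.1", 53),       # query to corporate DNS
    ("10.0.0.5", 51001, "1.2.3.4", 53),        # query to a rogue DNS server
    ("10.0.0.7", 44000, "93.184.216.34", 80),  # unrelated HTTP traffic
]

# Keep only packets involving port 53 (roughly equivalent to Wireshark's
# "dns" filter, which matches the protocol rather than just the port).
dns_only = [p for p in packets if 53 in (p[1], p[3])]
print(len(dns_only))  # -> 2
```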

Once we notice that the rogue DNS server is being contacted by a LAN host, we can change our visualisation to see which domain names are being queried through which server. We also conveniently attached the context tag “Suspicious DNS server” to the host (the result of our previous experiment). The result is illustrated below. It also shows that we are not limited to showing relations between IP addresses; we can, for example, illustrate the link between the DNS servers and the domains they query.


We clearly see the suspicious DNS server making requests to 2 different domain names. For each request made by the suspicious DNS server, we see an interaction from a host in the LAN.

Even with larger network captures, we can use this technique to quickly visualise connectivity to suspicious systems. In the example below, we can quickly see that the bulk of all DNS traffic is being sent to the trusted corporate DNS server, whereas a single host is interacting with the suspicious DNS server we identified before.


So what’s next?
Nothing keeps us from entirely changing the relation between two nodes. Typically, we visualise packets as going from IP address A to IP address B, but more exotic visualisations are possible (think of the relations between user-agents and domains, query lengths and DNS requests, etc.). In addition, there is plenty of opportunity to add more context to our graph nodes (link an IP address to a geographical location, correlate domain names with Indicators of Compromise, use whitelists and blacklists to more quickly distinguish baseline traffic from the rest, etc.). These are topics we want to further explore.

Going forward, we plan on continuing to share our experiments and insights with the community around this topic! Depending on where this takes us, we plan on releasing part of a more complete toolset to the community, too.


Making our lab rats happy, 1 piece of cheese at a time!

Squeesh out! 🐀

About the author
Daan Raman is in charge of NVISO Labs, the research arm of NVISO. Together with the team, he drives initiatives around innovation to ensure we stay on top of our game; innovating the things we do, the technology we use and the way we work form an essential part of this. Daan doesn’t like to write about himself in third-person. You can contact him at draman@nviso.be or find Daan online on Twitter and LinkedIn.