KRACKing WPA2

A new vulnerability in the WPA2 protocol was discovered by Mathy Vanhoef (researcher at KU Leuven) and published yesterday. The vulnerability – dubbed "KRACK" – enables an attacker to intercept WPA2-encrypted network traffic between a client device (e.g. a mobile phone or laptop) and a router. Depending on the network configuration, it is even possible for an attacker to alter or inject data. Since the vulnerability lies in the WPA2 protocol, most platforms are susceptible to the attack. However, the impact is even higher on Android 6.0+ and Linux devices due to their specific implementation of WPA2.

The original publication can be found at https://www.krackattacks.com, with full details, a demonstration of the attack, and recommendations.

How does it work?

The KRACK vulnerability is exploited through Key Reinstallation AttaCKs (hence the name). When a device wants to connect to a wireless access point, the WPA2 protocol uses a 4-way handshake. Part of this handshake consists of an acknowledgment sent from the client device to the wireless access point to confirm that the client successfully received the key. Until the wireless access point receives that acknowledgment, it will keep retransmitting the key. If an attacker blocks the acknowledgement from reaching the wireless access point, the attacker can record and replay these retransmissions, forcing the reinstallation of an already-used key on the client's device. Reinstalling the key also resets the associated nonce and replay counters, so the keystream used to encrypt the traffic between the client and the wireless access point is reused. This keystream reuse allows the attacker to decrypt the network traffic.
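To see why a reused keystream is so damaging, consider the following minimal Python sketch. It does not implement WPA2 or the KRACK attack itself; it merely simulates a generic stream cipher (a keystream derived with SHA-256 stands in for AES-CCMP) to show that two packets encrypted with the same key and nonce leak the XOR of their plaintexts.

# Minimal sketch (not WPA2): why reusing a (key, nonce) keystream is fatal.
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # Derive a deterministic keystream from the key and nonce.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce.to_bytes(8, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"session key from the 4-way handshake"
p1 = b"GET /mail HTTP/1.1"
p2 = b"POST /login pwd=hunter2"

# A key reinstallation resets the nonce, so both packets use nonce 0:
c1 = xor(p1, keystream(key, 0, len(p1)))
c2 = xor(p2, keystream(key, 0, len(p2)))

# Without knowing the key, XORing the two ciphertexts cancels the keystream
# and leaks the XOR of the plaintexts:
assert xor(c1, c2) == xor(p1, p2)

If one of the plaintexts is known or guessable, which is often the case for protocol traffic, the other follows immediately.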

Since this attack exploits a vulnerability in the four-way handshake, all WPA versions are vulnerable.

Are all devices vulnerable?

Since this vulnerability is found in the WPA2 protocol, all platforms, such as Windows, Linux, macOS, iOS, OpenBSD and Android, can be targeted. Linux and Android 6.0+ devices are especially vulnerable, because the WiFi client clears the encryption key from memory after installation. If the device is told to reinstall the key, the cleared-out key, which is effectively an all-zero key, will be installed instead. This makes it trivial for an attacker to intercept and inject or alter data sent by these devices.

Does this mean that we need to stop using WPA2?

Although an attacker is able to intercept traffic, and in some cases alter or inject traffic, this attack can only be performed when the attacker is in close proximity to the client device and the wireless access point.

Even if the attacker has successfully exploited this vulnerability, traffic protected by an additional layer of encryption, such as HTTPS (TLS) or a VPN, will not be readable by the attacker.

What should we do? 

When this vulnerability was discovered, it was first disclosed to the affected vendors. Many vendors (e.g. Microsoft, Cisco) have already released patches for this issue. Install the available patches on all your devices or contact your vendors to see if a patch is available. A list of patches per vendor can be found at https://www.kb.cert.org/vuls/byvendor?searchview&Query=FIELD+Reference=228519.

Furthermore, it is strongly advised to only use encrypted connections, such as HTTPS or a VPN connection, when sensitive content is transmitted over WiFi networks.

Additionally, watch out for rogue access points in your surroundings, for example in office buildings.

More information? 

NVISO analysts are still working on additional research and will update this blogpost with any results.

Should you require additional support, please don’t hesitate to contact our 24/7 hotline on +32 (0) 588 43 80 or csirt@nviso.be.

If you are interested in receiving our advisories via our mailing list, you can subscribe by sending us an e-mail at csirt@nviso.be.

YARA DDE rules: DDE Command Execution observed in-the-wild

The MS Office DDE YARA rules that we published yesterday have detected several malicious document samples since 10/10/2017.

Remark: the malicious samples we mention were detected with our DDEAUTO rule (Office_DDEAUTO_field); as we feared, the second rule (Office_DDE_field) is generating some false positives and we will update it.

The first sample uses PowerShell to download an executable and run it. With zipdump.py and our YARA rules we can extract the field code, and with the sed command "s/<[^>]*>//g" we can strip the XML tags to reveal the actual command:
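For readers who prefer Python over sed, the same tag stripping can be sketched as follows (document.xml is a placeholder for the XML stream extracted from the .docx container):

# Sketch: strip XML tags so the DDE command embedded in the field code
# becomes readable, a rough equivalent of sed "s/<[^>]*>//g".
import re

with open("document.xml", encoding="utf-8", errors="replace") as f:
    xml_text = f.read()

print(re.sub(r"<[^>]*>", "", xml_text))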

The second sample uses PowerShell with a second-stage DLL (we were not able to recover the DLL):

As could be expected, we also observed many samples that are not truly malicious, but are simply samples from people experimenting with DDE code execution since 10/10/2017. This could also be the case for the "DLL sample".

Finally, we observed a malicious campaign spoofing emails from the U.S. Securities and Exchange Commission (SEC)’s EDGAR filing system with .docx attachments abusing DDE code execution.

This campaign used compromised government servers to serve a PowerShell second-stage script:

Leveraging compromised government servers increases the success of such campaigns, because of the implied trust associated with government servers.

Talos published an extensive analysis of this campaign. We observed the same samples, as well as a sample (an email attachment) from the same campaign that uses Pastebin to host the second stage:

Should you have to analyze the next stages, know that they are PowerShell scripts that have been compressed and BASE64 encoded. Here is one method to extract these scripts:
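One way to do this in Python is sketched below. It assumes the common pattern of a BASE64 string wrapping a raw-Deflate-compressed script (the IO.Compression.DeflateStream construction); adjust the decompression call if the sample uses GZipStream instead. The input is whatever file you saved the extracted BASE64 blob to.

# Sketch: decode a next-stage PowerShell script that was Deflate-compressed
# and BASE64 encoded. Pass a file containing the extracted BASE64 blob.
import base64
import sys
import zlib

blob = open(sys.argv[1]).read().strip()
compressed = base64.b64decode(blob)
# -15 selects raw Deflate (DeflateStream); for GZipStream output, use
# zlib.decompress(compressed, 16 + zlib.MAX_WBITS) instead.
script = zlib.decompress(compressed, -15)
print(script.decode("utf-8", errors="replace"))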

New IOCs:

  • bf38288956449bb120bae525b6632f0294d25593da8938bbe79849d6defed5cb
  • 316f0552684bd09310fc8a004991c9b7ac200fb2a9a0d34e59b8bbd30b6dc8ea

Detecting DDE in MS Office documents

Dynamic Data Exchange is an old Microsoft technology that can be (ab)used to execute code from within MS Office documents. Etienne Stalmans and Saif El-Sherei from Sensepost published a blog post in which they describe how to weaponize MS Office documents.

We wrote 2 YARA rules to detect this in Office Open XML files (like .docx):

Update 1: our YARA rules detected several malicious documents in-the-wild.

Update 2: we added rules for OLE files (like .doc) and updated our OOXML rules based on your feedback.

// YARA rules Office DDE
// NVISO 2017/10/10 - 2017/10/12
// https://sensepost.com/blog/2017/macro-less-code-exec-in-msword/
 
rule Office_DDEAUTO_field {
  strings:
    $a = /<w:fldChar\s+?w:fldCharType="begin"\/>.+?\b[Dd][Dd][Ee][Aa][Uu][Tt][Oo]\b.+?<w:fldChar\s+?w:fldCharType="end"\/>/
  condition:
    $a
}
 
rule Office_DDE_field {
  strings:
    $a = /<w:fldChar\s+?w:fldCharType="begin"\/>.+?\b[Dd][Dd][Ee]\b.+?<w:fldChar\s+?w:fldCharType="end"\/>/
  condition:
    $a
}

rule Office_OLE_DDEAUTO {
  strings:
    $a = /\x13\s*DDEAUTO\b[^\x14]+/ nocase
  condition:
    uint32be(0) == 0xD0CF11E0 and $a
}

rule Office_OLE_DDE {
  strings:
    $a = /\x13\s*DDE\b[^\x14]+/ nocase
  condition:
    uint32be(0) == 0xD0CF11E0 and $a
}

These rules can be used in combination with a tool like zipdump.py to scan XML files inside the ZIP container with the YARA engine:
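If zipdump.py is not at hand, the same workflow can be approximated with the standard zipfile module and the yara-python bindings (a rough sketch; rules.yara and sample.docx are placeholder file names):

# Sketch: run the YARA rules above against every member of an OOXML (ZIP)
# container, similar to scanning with zipdump.py and the YARA engine.
import zipfile
import yara  # pip install yara-python

rules = yara.compile(filepath="rules.yara")

with zipfile.ZipFile("sample.docx") as container:
    for name in container.namelist():
        for match in rules.match(data=container.read(name)):
            print(name, match.rule)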

The detection is based on regular expressions designed to detect fields containing the word DDEAUTO or DDE. By dumping the detected YARA strings with the option --yarastringsraw, one can view the actual command:

Here is an example of the DDE rule firing:

You can also look for MS Office files containing DDE using this YARA rule in combination with ClamAV as described in this blog post.

YARA rules for CCleaner 5.33

First reported by Talos and Morphisec, the compromise of CCleaner version 5.33 is still making news.

At NVISO Labs, we created YARA detection rules as soon as the news broke, and distributed these rules to our clients subscribed to our NVISO Security Advisories. In a later blog post, we will explain in detail how to create such YARA rules, so that you can do the same for your organization.

Here are the YARA rules we created:

// YARA rules compromised CCleaner
// NVISO 2017/09/18
// http://blog.talosintelligence.com/2017/09/avast-distributes-malware.html

import "hash"

rule ccleaner_compromised_installer {
	condition:
		filesize == 9791816 and hash.sha256(0, filesize) == "1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff"
}

rule ccleaner_compromised_application {
	condition:
		filesize == 7781592 and hash.sha256(0, filesize) == "36b36ee9515e0a60629d2c722b006b33e543dce1c8c2611053e0651a0bfdb2e9" or
		filesize == 7680216 and hash.sha256(0, filesize) == "6f7840c77f99049d788155c1351e1560b62b8ad18ad0e9adda8218b9f432f0a9"
}

rule ccleaner_compromised_pdb {
	strings:
		$a = "s:\\workspace\\ccleaner\\branches\\v5.33\\bin\\CCleaner\\Release\\CCleaner.pdb" 
		$b = "s:\\workspace\\ccleaner\\branches\\v5.33\\bin\\CCleaner\\ReleaseTV\\CCleaner.pdb" 
	condition:
		uint16(0) == 0x5A4D and uint32(uint32(0x3C)) == 0x00004550 and ($a or $b)
}

You can scan the C: drive of a computer with YARA like this:

yara64.exe -r ccleaner.yara C:\

And there are also many other scanning tools that include the YARA engine, like ClamAV.

Hunting

The first two rules we created are hash-based, but the third rule (ccleaner_compromised_pdb) is based on a particular string found in CCleaner's 32-bit executables. This string is the full path of the Program Database (PDB) file, a debug file created by default by Visual Studio and referenced in compiled executables.

With this rule, we were able to identify 235 files on VirusTotal. Most of these are actually container files (like ZIP files): CCleaner is a popular application, and is distributed through various channels other than Piriform’s website. We saw examples of portable application packages distributing this compromised version of CCleaner (like LiberKey) and also RAR files with pirated versions of CCleaner.

23 files were actual executables; all but one were compromised versions of the 32-bit executable of CCleaner version 5.33. Most of these files did not have a (valid) signature: they were modified versions, e.g. cracked and infected with other malware.

Only one executable detected by our ccleaner_compromised_pdb rule was not infected: an executable with SHA256 hash c48b9db429e5f0284481b4611bb5b69fb6d5f9ce0d23dcc4e4bf63d97b883fb2. It turns out to be a 32-bit executable of CCleaner version 5.33, digitally signed on 14/09/2017, i.e. after Talos informed Avast/Piriform. The build number was increased by one (6163 instead of 6162). This executable was signed with the same certificate that was used for the compromised version 5.33 (thumbprint f4bda9efa31ef4a8fa3b6bb0be13862d7b8ed9b0), and also for follow-up version 5.34. Yesterday (20/9/2017), Piriform released a new version (5.35), signed with a new digital certificate obtained that same day.

At this moment, we are uncertain about the origins and purpose of this particular executable (c48b9db429e5f0284481b4611bb5b69fb6d5f9ce0d23dcc4e4bf63d97b883fb2).

Are you infected?

Our rules allow you to detect compromised CCleaner executables in your environment, but this does not imply that the machines identified by these rules were infected.

Our analysis shows that the compromised CCleaner installer (version 5.33) will install 32-bit and 64-bit versions of the CCleaner executables on a Windows 64-bit machine, and only the 32-bit version on a Windows 32-bit machine.

The shortcuts (like Start and Desktop shortcuts) deployed during the install on Windows 64-bit machines will point to the 64-bit executable, hence normal usage on a Windows 64-bit machine will execute 64-bit CCleaner.

Only the 32-bit executable of CCleaner is compromised.

It is therefore perfectly possible that compromised 32-bit executables of CCleaner are detected on Windows 64-bit machines with the YARA rules we provided, but that this compromised version was never executed.

If the compromised 32-bit executable runs successfully, it will create the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Piriform\Agomo

Two values will be found for this key: MUID and TCID
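A quick way to check for this marker is sketched below (a rough Python sketch, not an official detection tool). Because the malicious code runs in a 32-bit process, the key may live in the WOW6432Node view on 64-bit Windows, so both registry views are queried.

# Sketch: look for the HKLM\SOFTWARE\Piriform\Agomo key and its MUID/TCID
# values in both the 32-bit and 64-bit registry views.
import winreg

for label, view in (("32-bit view", winreg.KEY_WOW64_32KEY),
                    ("64-bit view", winreg.KEY_WOW64_64KEY)):
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\Piriform\Agomo",
                             0, winreg.KEY_READ | view)
    except FileNotFoundError:
        continue
    for value in ("MUID", "TCID"):
        try:
            print(label, value, winreg.QueryValueEx(key, value)[0])
        except FileNotFoundError:
            pass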

Conclusion

We recommend that you check for the presence of this registry key, should our YARA rules detect compromised CCleaner installations on your machines.

Compromised machines should be reinstalled after a DFIR investigation.

Active exploitation of Struts vulnerability S2-052 CVE-2017-9805

Last night (6 September 2017 UTC) we observed active exploitation of Struts vulnerability S2-052 CVE-2017-9805 (announced a day earlier).

Here is the request we observed:

The POST request to /struts2-rest-showcase/orders/3 initially allowed us to detect this attempt.

The packet capture shows that this is a full exploit attempt for reconnaissance purposes: the payload is a /bin/sh command that executes a silent wget request to a compromised Russian website (with the name of the scanned site as query parameter). The downloaded content is discarded.

The XML data used in this exploit attempt is slightly different from the Metasploit module for CVE-2017-9805: XML element ibuffer is represented as <ibuffer/> in this exploit attempt, while it is represented as <ibuffer></ibuffer> in the proposed Metasploit module. Both forms are functionally equivalent.

We found an older version of the script used in this attack here. This shows how fast attackers attempt to abuse newly discovered vulnerabilities using (potentially unverified) exploits.

The source of this request is the same compromised Russian website; it is the first time we have observed exploit attempts for CVE-2017-9805. We have seen no other requests from this source since this attempt.

We recommend that everyone keep an eye out for this type of behavior in their web server logs. As always, we won't hesitate to share any additional observations we make. Should you observe suspicious behavior and would like to receive additional support, please don't hesitate to get in touch with our experts!

 

Decoding malware via simple statistical analysis

Intro

Analyzing malware often requires reverse engineering code, which can scare people away from malware analysis.

Executables are often encoded to avoid detection. For example, many malicious Word documents have an embedded executable payload that is base64 encoded (or some other encoding). To understand the encoding, and be able to decode the payload for further analysis, reversing of the macro code is often performed.

But code reversing is not the only possible solution. Here we will describe a statistical analysis method that can be applied to certain malware families, such as the Hancitor malicious documents. We will present this method step by step.

Examples

First we start with a Windows executable (PE file) that is BASE64 encoded. BASE64 encoding uses 64 different characters to encode binary data. Since each BASE64 character carries only 6 bits, there is an overhead: every 3 bytes (24 bits) of input are encoded as 4 BASE64 characters, so the encoded data is about a third larger than the original.

With byte-stats.py, we can generate statistics for the different byte values found in a file. When we use this to analyze our BASE64 encoded executable, we get this output:

[Screenshot: byte-stats.py output for the BASE64-encoded file]

In the screenshot above, we see that there are 64 different byte values and that 100% of the byte values are BASE64 characters. This is a strong indication that the data in file base64.txt is indeed BASE64 encoded.
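For readers who want to reproduce this kind of check without the tool, here is a rough Python equivalent (the file name base64.txt is taken from the example above):

# Sketch: count unique byte values and the fraction that are BASE64
# characters, roughly what byte-stats.py reports for this file.
import collections
import string

BASE64_CHARS = set((string.ascii_letters + string.digits + "+/=").encode())

data = open("base64.txt", "rb").read()
counts = collections.Counter(data)

print("unique byte values:", len(counts))
base64_count = sum(n for b, n in counts.items() if b in BASE64_CHARS)
print("BASE64 characters: {:.1%}".format(base64_count / len(data)))
print("values seen:", "".join(map(chr, sorted(counts))))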

Using the option -r of byte-stats.py, we are presented with an overview of the ranges of byte values found in the file:

[Screenshot: byte-stats.py -r output showing the byte value ranges]

The identified ranges /0123456789, ABCDEFGHIJKLMNOPQRSTUVWXYZ and abcdefghijklmnopqrstuvwxyz (plus the single character +) confirm that this is indeed BASE64 data. Padded BASE64 data would also include one or two padding characters at the end (the padding character is =).

Decoding this file with base64dump.py (a BASE64 decoding tool) confirms that it is a BASE64-encoded PE file (cf. the MZ header).

[Screenshot: base64dump.py output showing the decoded PE file]
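The same confirmation can be sketched with a few lines of Python, again using the file name from the example:

# Sketch: decode the BASE64 data and confirm that the result starts with
# the "MZ" magic of a PE file.
import base64

decoded = base64.b64decode(open("base64.txt", "rb").read())
print(decoded.startswith(b"MZ"), decoded[:64])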

Now, sometimes the encoding is a bit more complex than just BASE64 encoding.

Let’s take a look at another sample:

[Screenshot: byte-stats.py -r output for the second sample]

The range of lowercase letters, for example, starts with d (instead of a) and ends with } (instead of z). We observe a similar change for the other ranges.

It looks like all BASE64 characters have been shifted 3 positions to the right.

We can test this hypothesis by subtracting 3 from every byte value (that is, shifting 3 positions to the left) and analyzing the result. To subtract 3 from every byte, we use the program translate.py. translate.py takes a file and an arithmetic operation as input: the operation "byte - 3" subtracts 3 from every byte value.
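If translate.py is not available, the same transformation is easy to sketch in plain Python (input and output file names are placeholders):

# Sketch: shift every byte value 3 positions to the left, i.e. the
# equivalent of translate.py with the expression "byte - 3".
data = open("shifted.txt", "rb").read()
decoded = bytes((b - 3) % 256 for b in data)
open("unshifted.txt", "wb").write(decoded)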

This is the result we get when we perform a statistical analysis of the byte values shifted 3 positions to the left:

[Screenshot: byte-stats.py output after shifting every byte 3 positions to the left]

In the screenshot above we see 64 unique byte values, and all of them are BASE64 characters. When we try to decode this with base64dump.py, we can indeed recover the executable:

[Screenshot: base64dump.py output showing the recovered executable]

Let’s move on to another example. Malicious documents that deliver Hancitor malware use an encoding that is a bit more complex:

[Screenshot: byte-stats.py -r output for the Hancitor sample]

This time, we have 68 unique byte values, and the ranges are shifted by 3 positions when we look at the left of a range, but they appear to be shifted by 4 positions when we look at the right of a range.

How can this be explained?

One hypothesis is that the malware is encoded by shifting some of the bytes by 3 positions and the other bytes by 4 positions. A simple method is to alternate this shift: the first byte is shifted by 3 positions, the second by 4 positions, the third again by 3 positions, the fourth by 4 positions, and so on …

Let's test this hypothesis by using translate.py to shift by 3 or 4 positions, depending on the position:

[Screenshot: translate.py with the alternating 3/4 shift and the resulting byte statistics]

The variable position is an integer giving the position of the byte (starting at 0), and position % 2 is the remainder of dividing position by 2. The expression position % 2 == 0 is True for even positions and False for odd positions. IFF is the IF function: if argument 1 is true, it returns argument 2, otherwise it returns argument 3. This is how we shift our input alternately by 3 and 4.
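The same expression can be written as a plain Python sketch, which makes it easy to test both orderings of the shift (the input file name is a placeholder):

# Sketch: apply an alternating shift to bytes at even and odd positions,
# the plain Python equivalent of the IFF(position % 2 == 0, ...) expression.
def alternating_shift(data: bytes, even_shift: int, odd_shift: int) -> bytes:
    return bytes((b - (even_shift if i % 2 == 0 else odd_shift)) % 256
                 for i, b in enumerate(data))

data = open("encoded.txt", "rb").read()
candidate = alternating_shift(data, 3, 4)   # first byte -3, second -4, ...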

But as you can see, the result is certainly not BASE64, so our hypothesis is wrong.

Let’s try with shifting by 4 and 3 (instead of 3 and 4):

[Screenshot: translate.py with the alternating 4/3 shift and the resulting byte statistics]

This time we get the ranges for BASE64.

Testing with base64dump.py confirms our hypothesis:

[Screenshot: base64dump.py output confirming the hypothesis]

Conclusion

Malware authors use encoding schemes that can be reverse engineered by statistical analysis and testing simple hypotheses. Sometimes a bit of trial and error is needed, but these encoding schemes can be simple enough to decode without having to perform reverse engineering of code.

Don’t be lazy with P4ssw0rd$

Three challenges to making passwords user-friendly

Following the interview with Bill Burr, author of NIST's 2003 guidance on electronic authentication, in which he announced that he regrets much of what he wrote, we stop and think.

Why was the standard putting users at risk? Paraphrasing history: "Tout pour le peuple; rien par le peuple" (everything for the people, nothing by the people). Perfectly correct from a theoretical point of view, the standard failed to acknowledge that users are indeed people, and when asked to follow overly complex rules they will find "tricks" to help themselves remember their current nightmarish password. Of course, said tricks are fairly easy to guess by any decent hacker, let alone an educated computer.

Nothing new here: the user is often, and unfairly, considered the problem. But since there is no easy way to fix the user, it is up to us, as security and IT professionals, to design and build our systems to make them more resilient to human mistakes, and maybe some laziness.


Difficult for you, easy for a computer: passwords haven't been what they should be.

 

Ah, those funny stories on predictable passwords

The problem with the previous standard wasn't that it advised people to create easy-to-crack passwords, but that overly complex rules steered users towards the path of least resistance: very complex and yet very predictable passwords.

I remember working in a team where, by knowing how long a colleague had been around, you could easily guess their password by applying the simple rule of Company_nn, where nn was the number of times the password had been rotated.

So, what now?
Three challenges to making passwords user-friendly

The new NIST 800-63 special publication, and earlier publications such as the guidance from GCHQ's NCSC, turn the approach upside down: make your password policy user-friendly and you'll get better security. A simple idea: put the burden as much as possible on the verifier, not the user. With one dream: create security that works no matter what people do. Is it all that easy? Let's look at three recurrent challenges we've encountered at our clients:

1. Make it hard to guess with a blacklist check

What is this about?
Forget complexity and just make sure you don't use a word from the dictionary, a known first or last name, or a commonly used password (based on public lists of breached passwords). Now, this is easier said than done.
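Conceptually, the check itself is simple; a minimal Python sketch (with blacklist.txt as a placeholder for any list of breached or common passwords) could look like this:

# Sketch: reject a candidate password if it appears on a blacklist of
# common or breached passwords. Real deployments should also normalize
# obvious variations (case, common substitutions) before the lookup.
def load_blacklist(path: str) -> set:
    with open(path, encoding="utf-8", errors="ignore") as f:
        return {line.strip().lower() for line in f}

def is_acceptable(candidate: str, blacklist: set) -> bool:
    return candidate.lower() not in blacklist

blacklist = load_blacklist("blacklist.txt")
print(is_acceptable("P4ssw0rd$", blacklist))

The hard part, as discussed below, is not the lookup itself but keeping the list up to date and integrating the check into systems that were never designed for it.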

Why is it a challenge?
While quality password blacklists can be found online, most systems and applications on the market offer neither the blacklist validation mechanism nor the integration with frequently updated blacklists. Azure AD, for example, has offered this functionality for only a year, and its scope remains limited. And most organizations use an on-premises AD, or something else that lacks such a native password validation check.

So what?
There are workarounds of course, but they're not always robust and they imply manual maintenance of a blacklist – an effort many organizations are reluctant to commit to. It will be interesting to see how the market catches up on this one. Until then, most system administrators prefer to keep some complexity requirements in place.

2. Make it easy to remember by promoting the use of passphrases

What is this about?
Lengthy passwords, such as passphrases, are much more likely to incorporate human randomness: when done properly, they are easy to remember, yet almost impossible for an automated system to make sense of. As usual, xkcd got it right.

Why is it a challenge?
While passphrases are a simplification on paper, especially if complexity requirements are dropped, they're also a new paradigm for most end users. Let's face it: a 24-character password sounds scary, and users are clearly reluctant to commit to it. We've tested this on a few friends: after some enthusiastic explanation on our part, they agreed to switch.
For the first few days, our names were accompanied by words that weren't exactly kind. After a week, the cursing had disappeared and they got used to typing long passwords, often several times to get it right. With lockouts increasingly replaced by password throttling, the frustration luckily did not end in users being locked out.
But only a few passwords were changed: replacing all passwords in use would have meant inventing tens of completely new passwords, each based on a completely new reasoning.

So what?
This tells us that awareness and communication are needed to make mentalities evolve. Maybe by re-using some of the good Belgian material from our friends at safeonweb.be. Even then, you may wish to focus your effort on one specific password – typically, their Windows password.
But it also tells us that users should only have to remember 4 or 5 passwords: the rest should be in a password vault. Here too, it's about changing users' habits. And again, this works fine until you want to connect from a device other than the one hosting the vault. Who said cloud? But that's another debate.

3. If it’s still a secret, why change it?

What is this about?
NIST has gone bold with its advice: only change a password if you think (or know) it's compromised. Don't have passwords expire on a recurring schedule; this is exactly how passwords become predictable. Of course, this only works if the other NIST recommendations are implemented, especially increased length and the blacklist check.

What's the challenge?
It's like perfect information for consumers in economics: in theory, we should all know everything. But look around and you'll see it may take months or years to find out that your users' passwords were stolen – not to mention that they might have been using that password all over the internet.

So what?
The best friend of "no expiration" is "second factor", making sure the memorized secret alone won't let anyone in. Of course, given the cost and inherent complexity of these solutions, you'll probably make a risk-based selection of the layers and applications where you want to implement it – or, even better, go for a common authentication portal that supports adaptive authentication.

What does this all tell us, then?

That the world is moving to user-friendly security, at last. And the best part is: it's doing so because the old security didn't work. But it also tells us that these changes will be complex to implement, because systems are not ready and users have to unlearn what we've spent the last 20 years pushing into their brains.

This is essentially what our colleague Benoit said on TV a few weeks ago. In case you missed it, you can watch it here.
