Monthly Archives: March 2013

ApkScan beta released

Yesterday we released the first version of ApkScan! For those who can’t wait to run their Android applications through the scanner, ApkScan can be found at http://apkscan.nviso.be/. Two example reports generated by ApkScan can be found here and here. More details after the screenshot.

As we mentioned in a previous blog post, ApkScan allows you to scan Android packages for malicious activity. For this analysis, we use a combination of static and dynamic scanning techniques. Although we plan to continuously add new scanning methods and improve existing ones, the current version of ApkScan already performs the following analysis:

Static analysis

  • Analysis of AndroidManifest.xml
    • Registered permissions
    • Registered services
  • Analysis of disassembled source code
    • Extraction of hard-coded URLs

Dynamic analysis

  • Behavioral analysis using DroidBox
    • Behavior graphs
    • Placed phone calls
    • Sent SMS messages
    • Cryptographic activity
    • Information leakage (network / SMS / file)

External services

  • Virus scan of original samples using the VirusTotal API
  • URL scan of hard-coded URLs using the Google Safe Browsing API
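
To give an idea of what the URL extraction step involves, here is a minimal sketch of how hard-coded URLs could be pulled out of disassembled sources with a regular expression. The pattern and the smali snippet are illustrative, not the exact implementation used by ApkScan.

```python
import re

# Simple pattern for http/https URLs embedded in decompiled (smali/Java) sources.
URL_PATTERN = re.compile(r'https?://[\w.-]+(?:/[\w./?%&=-]*)?')

def extract_urls(source_text):
    """Return the unique hard-coded URLs found in a piece of source code."""
    return sorted(set(URL_PATTERN.findall(source_text)))

# Example: const-string instructions as they appear in smali output.
smali_snippet = '''
const-string v0, "http://evil.example.com/gate.php?id=1"
const-string v1, "https://api.example.org/report"
'''
print(extract_urls(smali_snippet))
```

In practice you would run this over every file produced by the disassembler and aggregate the results per sample.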

In order to support these scanning features, we have implemented our own ApkScan API client. This client fetches the uploaded samples to a BackTrack machine, where it runs and analyses them in a sandboxed environment.

We look forward to your feedback and suggestions! We will be posting more updates soon, so keep an eye on this blog!
 

Mitigation strategies against cyber threats

So it’s been a good two months since we started business! We thought we’d take some time to reflect on these two months, which have seen some interesting security news, including the well-known Mandiant report on APT1 and the widespread Java chaos.


Last week, ENISA published a “Flash Note” on Cyber Attacks, urging organizations to take precautions to prevent cyber attacks. In order to protect against cyber threats, they provide the following recommendations:

  • Protect targets to avoid weaknesses being exploited by adversaries: prevention should be the primary defence against attacks.
  • Consider the use of more secure communication channels than e-mail in order to protect users from spoofing and phishing attacks.
  • Proactively reduce your attack surface by reducing the complexity of software installed on user devices and reducing the permissions of users to access other devices, services and applications by applying the principle of least privilege.

While we believe these are good recommendations, we are also convinced that it is often not easy for organizations to implement these types of recommendations in practice.

An interesting, more practical perspective on defending against cyber threats is provided by Australia’s DSD (Defence Signals Directorate). In the same spirit as the “SANS Top 20 Critical Controls“, the DSD has summarized a top 35 of mitigation strategies to protect against cyber threats, a ranking based on DSD’s analysis of reported security incidents and vulnerabilities discovered on Australia’s government networks.

That top 35 is of course just another long list of security controls that should be implemented. Interestingly enough, however, the DSD noticed that 85% of all cyber threats can be mitigated by implementing only the top 4 of these 35 strategies. This top 4 boils down to the following controls:

Application whitelisting

Let’s face it: with malware developing at the pace it currently does, it is unrealistic to expect antivirus solutions to offer in-depth protection against every single malware infection. They are a good and necessary first step, detecting older and known malware, but the most effective method is not trying to detect malware: it is to allow only known, valid programs to run.

While application whitelisting clearly is the “way to go” with regard to security, it’s often not an easy choice for various reasons (“Damn you IT, this is not workable!”, “We are a company with an open culture, we don’t lock down our desktops”, “We trust our employees”, …). If you ever manage to convince your organization to consider application whitelisting, another daunting challenge awaits the IT administrator: creating and managing the application whitelist.

Now, establishing what ‘valid’ applications are for your environment is no easy task. Besides the usual client-side applications such as PDF readers and other office software, the list of approved software will also include applications specific to your environment. So how do you go about creating and managing this type of whitelist? Here are a couple of ideas:

  • To create your initial whitelist, a good idea would be to take a normal employee workstation and list all of the installed applications. In a reasonably static environment, this is a good approach, seeing as normal employees wouldn’t be able to install new applications themselves anyhow.
  • A less “strict” approach would be the use of software signing: you could protect your environment by only allowing software signed by trusted developers or vendors to be installed.

If you cannot implement application whitelisting in a “blocking” mode, consider configuring it to only monitor (not block) applications that are not on the whitelist. This will provide you with additional insight into the software running in your environment, without further restricting your employee workstations.
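
As an illustration of such a monitor-only mode, the sketch below checks binaries against a hash-based whitelist and reports the unknown ones instead of blocking them. The whitelist contents are a placeholder; a real deployment would use a commercial product or OS feature rather than a script.

```python
import hashlib
from pathlib import Path

# Hypothetical whitelist of SHA-256 digests of approved binaries.
APPROVED_HASHES = {
    # SHA-256 of an empty file, used here purely as a placeholder entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    """Compute the SHA-256 digest of a file on disk."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def audit(paths):
    """Monitor-only mode: return the binaries that are NOT on the
    whitelist, so they can be logged instead of blocked."""
    return [p for p in paths if sha256_of(p) not in APPROVED_HASHES]
```

Reviewing the audit log over time also tells you which legitimate applications are still missing from the whitelist before you ever switch to blocking mode.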


When it comes down to tooling, there are several commercial solutions available. For typical large Windows environments, a good solution is AppLocker, which builds further on the older “Software Restriction Policies”.

Patching operating systems and server software

This is where it all starts. We are still seeing too many organizations that lag behind when it comes to the roll-out of security patches for operating systems and server software.

During the majority of our security assessments and penetration tests, we are frightened by the number of hosts that are missing six-month-old critical security patches. Unfortunately, any one of these critical risks is usually enough to compromise an entire Windows domain.

In order to keep abreast of new security vulnerabilities, organizations should set up an effective patch management process. I’m sure most of you have this type of process, so let’s take a closer look and see whether your process includes:

  • A security patch assessment that verifies to what extent security patches are applicable to the organization and what the actual impact is;
  • A release cycle for security patches that includes proper testing and a controlled release;
  • An emergency process for urgent and critical patches (based on the criticality of the patch, you should consider what the biggest risk is: following the process and increasing your exposure window or immediately applying the patch and risking availability issues?);
  • An overview of all systems and their configuration to ensure all concerned systems are patched;
  • A process for temporary workarounds;
  • A system that provides metrics and monitors the effectiveness of the process itself.
Patching client applications

Server-side vulnerabilities usually receive the most attention from organizations, even though patching client applications deserves just as much focus. Targeting security’s weakest link (humans), attackers often rely on trickery (e.g. spear phishing) to exploit vulnerabilities in client applications.

Just consider the following vulnerabilities for JRE (the Java Runtime Environment):


Java Runtime Environment (JRE) vulnerability overview

These vulnerabilities often remain “hidden”, as your typical network vulnerability scanners (e.g. Qualys, NeXpose, Nessus,…) will not pick them up by default.

There are a couple of ways of identifying what software is running on your client machines:

  • Configure authenticated scans. Network scanners can usually be configured to authenticate to hosts in your environment and detect vulnerabilities in the installed client software. Although this type of configuration works well and provides you with the required information, it should be implemented and monitored very carefully. After all, you are providing an automated vulnerability scanner with (administrative) credentials to your client machines. I’m sure most of you have had incidents in the past where a vulnerability scanner did something you didn’t expect (e.g. used an “unsafe check” that took down a vulnerable server). Now imagine the havoc a scanner could cause when it has administrative privileges on 90% of your network.

  • Install “application monitoring” software on your client machines. An alternative to the above approach is to install a small monitoring tool on your client machines that sends information on installed software to a central server. You can then centrally track the different software versions in your network. A possible pitfall here could be privacy concerns that would prevent you from installing this type of “monitoring software” on your employee machines, so it’s something to first check with your legal department.
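
A sketch of what such a monitoring agent might report, assuming the platform-specific collection (e.g. querying the Windows registry or a package manager) happens elsewhere; the payload format and any server endpoint are purely illustrative.

```python
import json
import platform

def build_inventory(installed):
    """Assemble the report an agent would send to the central server.

    `installed` is a list of (name, version) tuples gathered by a
    platform-specific collector; the JSON layout here is hypothetical."""
    return json.dumps({
        "host": platform.node(),
        "software": [{"name": n, "version": v} for n, v in installed],
    })

# The agent would then POST this payload to the central tracking server.
report = build_inventory([("Adobe Reader", "9.5.0"), ("Java JRE", "1.7.0_17")])
```

With the version data centralised, spotting machines running an outdated JRE becomes a simple query instead of a scanning exercise.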

Restricting administrative privileges

Once a system has been compromised (in any way possible), an attacker will attempt to further escalate his privileges, both on the system and inside the victim network. Minimizing and tracking administrative privileges will help you contain malware infections and compromises (which will eventually occur in any environment).

As a good example of this, many IT organizations have an abundance of administrative privileges attributed to IT staff (e.g. the majority of IT staff having domain administrator privileges). It is understandable that “the business needs to keep running”, but there are a couple of ways to improve the situation without “crippling” system administrators:

  • Provide system administrators with two accounts: one for normal daily usage and one that can be used for administrative tasks.
  • Review administrative accounts and apply the least privilege principle, i.e. system administrators often require elevated privileges, but not “Domain Administrator”. 
  • Review application accounts and apply the least privilege principle, i.e. do not let your applications run with SYSTEM privileges.
  • Implement and enforce a proper IAM process for administrative accounts that will monitor the spread and usage of administrative accounts.

Conclusion
Over the years, a number of organizations have invested time in creating “security control lists” to assist organizations in minimizing their exposure to cyber threats. While most of these lists are good, we believe Australia’s DSD does a very good job of defining 4 concrete and clear mitigation strategies that can be applied to most organizations.

It’s clear that each of these controls deserves a dedicated blog post providing additional insight into how it could be implemented. We do hope that this post has given you some food for thought, and we’ll make sure to regularly provide you with more insights.

If you’re interested in more NVISO updates and info on how we can help you, please visit www.nviso.be.

References:
Mandiant APT1 report

Introducing ApkScan

Two weeks ago, we presented our Android malware research at the SANS Community Night. One of the presentations discussed the development of ApkScan, a service we are developing to facilitate distributed malware analysis of Android samples. Although quite a few (online) malware analysis services already offer Android malware analysis reports, we feel they have a few shortcomings:

  • The user (and the quality of the reports) fully depends on the techniques used by the malware analysis service to which samples are submitted. Although most services mention the tools they use to perform the analysis, it remains somewhat of a “black box” for users of the service to know what exactly happens during the analysis. What happens if a user would like to gather analysis information that is not provided by the online service? The user is often left out in the cold, which brings us to our next point.
  • Existing online malware research services that we reviewed do not allow clients to interface with submitted samples in order to perform their own analysis. Samples are in most cases hashed, after which the original binaries are removed (except for some metadata such as the file name and size). This not only makes it difficult for users to collaborate, it often makes the service useless to malware researchers trying to gain access to sample data. Some websites, such as Contagio Mobile, do a great job of offering samples for research purposes, but it’s often cumbersome to find the sample you need. More importantly, research already performed by others is not linked to the samples; each researcher could have a “different piece of the puzzle” if collaboration is non-intuitive. We understand the concerns linked to storing and offering potentially harmful binaries, but we are convinced that allowing access to such samples in a controlled and more streamlined way would greatly benefit the research community.


Our goal with ApkScan is to address the above-mentioned shortcomings and provide a way to analyse Android samples using a more distributed, “white box” approach. The architecture of ApkScan was also presented during the talk; from a high level, it looks as follows (click for full size):



Overview of the ApkScan architecture


The architecture is made up of two main components: the ApkScan back-end and the distributed ApkScan API clients. The back-end is hosted on our end and contains the following components:
  • An application server hosting the ApkScan web front-end where users can submit new samples through a browser.
  • A sample daemon that can fetch samples from sources other than uploads through the browser. The idea is to link the sample daemon to app markets (official and malicious), so that samples of interest can be fetched and analysed automatically, without requiring intervention from a user uploading the sample in the web front-end. Due to the large influx of new apps released in these markets, the daemon will be powered by a robust search engine (e.g. “fetch all applications released this month in the Finance category that have ‘Online Banking’ in their name or description”).
  • A RESTful API. Through this API, remote clients can interact with our back-end systems without using the web front-end, making it much more convenient to pull and push data. The API in its current shape already exposes quite a lot of functionality, including fetching samples and reporting data from other clients, as well as pushing new reports to the server. In order to interact with the API, a token is required. We will hand out tokens in a controlled way upon launching, but the API will be accessible to everyone interested in using it.
  • The ApkScan API clients do the actual work: they fetch new samples that are pending analysis from our back-end, perform the analysis (static, dynamic, whatever the client supports!) and (optionally) resubmit the results to our back-end. As we want to encourage users to submit their analysis results (a key point of ApkScan is to facilitate collaboration), we will be working out a way to reward those that do so. Anyone who is into malware research (no matter how simple or complex) can develop a client and benefit. If, for example, you are interested in Android malware that targets your financial clients, you could implement an API client that analyses samples for the presence of certain strings or URLs. You get access to samples and existing raw reporting data, and in return you submit the outcome of your research. Win-win.
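
To give an idea of what an API client could look like, here is a minimal sketch of the fetch/submit cycle. The endpoint paths, token header and JSON fields are hypothetical; the real API documentation will be published at launch. The transport is abstracted behind callables so the flow can be shown without a live back-end.

```python
import json

# Hypothetical values: the real base URL, token scheme and JSON fields
# will be described in the API documentation published at launch.
API_BASE = "https://apkscan.nviso.be/api/v1"
TOKEN = "your-api-token"

def next_pending_sample(http_get):
    """Ask the back-end for a sample awaiting analysis.

    `http_get` abstracts the transport (e.g. urllib with the token
    header set), so any HTTP library can be plugged in."""
    body = http_get(API_BASE + "/samples/pending",
                    headers={"X-Api-Token": TOKEN})
    return json.loads(body)

def submit_report(http_post, sample_id, findings):
    """Push analysis results back so other researchers can build on them."""
    payload = json.dumps({"sample": sample_id, "findings": findings})
    return http_post(API_BASE + "/reports", data=payload,
                     headers={"X-Api-Token": TOKEN})
```

A client would loop over these two calls: fetch a pending sample, run whatever analysis it supports, then submit the findings.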


 The ApkScan web application in development. It lets users submit Android packages that are then picked up by the API clients for analysis.


 Extract from one of the reports generated by ApkScan.


Our research team at NVISO has already developed a first ApkScan API client that performs basic static and dynamic malware analysis on any sample submitted through the web front-end. We use a wide range of (existing and custom) tools for this, including:

  • DroidBox for behavioural analysis. We are updating certain parts of the code (including reporting and interaction with the sandbox) to facilitate a more streamlined process for automated analysis.
  • Our own scripts and tools to perform static analysis (manifest parsing, string hunting, interaction with other online services). The code will be open-sourced upon release.
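
As an example of what the manifest-parsing step involves, the sketch below lists the permissions requested in an AndroidManifest.xml. It assumes the manifest has already been decoded to plain XML (e.g. with apktool), since manifests inside an APK are stored in a binary format; the sample manifest and function names are illustrative, not our actual tooling.

```python
import xml.etree.ElementTree as ET

# Android attributes live in this XML namespace.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def registered_permissions(manifest_xml):
    """List the permissions an app requests in its AndroidManifest.xml."""
    root = ET.fromstring(manifest_xml)
    return [elem.get(ANDROID_NS + "name")
            for elem in root.iter("uses-permission")]

manifest = '''<manifest xmlns:android="http://schemas.android.com/apk/res/android"
              package="com.example.app">
  <uses-permission android:name="android.permission.SEND_SMS"/>
  <uses-permission android:name="android.permission.INTERNET"/>
</manifest>'''
print(registered_permissions(manifest))
```

A combination like SEND_SMS plus INTERNET in an app that has no business sending messages is exactly the kind of signal the static analysis flags for review.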


Our API client is built in Ruby and deployed on a BackTrack machine; however, you could use any programming language that can interact with the RESTful API. Over the coming weeks we will be working on ApkScan with the goal of releasing it to the public. When we launch, we will:

  • Launch the ApkScan web application where users can submit samples and view analysis reports.

  • Launch an ApkScan API along with documentation on how to interact with it.

  • Open-source part of the project — including an example ApkScan API client to get you started working on your own client.

We are very curious to hear your input, concerns, suggestions and other comments. Please feel free to leave a comment or get in touch with us through any of the other channels. We are looking forward to releasing ApkScan; keep an eye on this blog and our Twitter feed for updates!