
Wednesday, August 17, 2016

Protecting Android with more Linux kernel defenses




Posted by Jeff Vander Stoep, Android Security team



Android relies heavily on the Linux kernel for enforcement of its security
model. To better protect the kernel, we’ve enabled a number of mechanisms within
Android. At a high level these protections are grouped into two
categories—memory protections and attack surface reduction.

Memory protections



One of the major security features provided by the kernel is memory protection
for userspace processes in the form of address space separation. Unlike
userspace processes, the kernel’s various tasks live within one address space
and a vulnerability anywhere in the kernel can potentially impact unrelated
portions of the system’s memory. Kernel memory protections are designed to
maintain the integrity of the kernel in spite of vulnerabilities.

Mark memory as read-only/no-execute



This feature segments kernel memory into logical sections and sets restrictive
page access permissions on each section. Code is marked as read only + execute.
Data sections are marked as no-execute and further segmented into read-only and
read-write sections. This feature is enabled with config option
CONFIG_DEBUG_RODATA. It was put together by Kees Cook and is based on a subset
of Grsecurity’s KERNEXEC feature by Brad
Spengler and Qualcomm’s CONFIG_STRICT_MEMORY_RWX feature by Larry Bassel and
Laura Abbott. CONFIG_DEBUG_RODATA landed in the upstream kernel for arm/arm64
and has been backported to Android’s 3.18+ arm/arm64 common kernel.

Restrict kernel access to userspace



This feature improves protection of the kernel by preventing it from directly
accessing userspace memory. This can make a number of attacks more difficult
because attackers have significantly less control over kernel memory
that is executable, particularly with CONFIG_DEBUG_RODATA enabled. Similar
features were already in existence, the earliest being Grsecurity’s UDEREF. This
feature is enabled with config option CONFIG_CPU_SW_DOMAIN_PAN and was
implemented by Russell King for ARMv7 and backported to Android’s 4.1 kernel
by Kees Cook.

Improve protection against stack buffer overflows



Much like its predecessor, stack-protector, stack-protector-strong protects
against stack buffer overflows, but additionally provides coverage for more
array types, as the original only protected character arrays.
Stack-protector-strong was implemented by Han Shen and added to the gcc 4.9
compiler.


Attack surface reduction



Attack surface reduction attempts to expose fewer entry points to the kernel
without breaking legitimate functionality. Reducing attack surface can include
removing code, removing access to entry points, or selectively exposing
features.

Remove default access to debug features



The kernel’s perf system provides infrastructure for performance measurement and
can be used for analyzing both the kernel and userspace applications. Perf is a
valuable tool for developers, but adds unnecessary attack surface for the vast
majority of Android users. In Android Nougat, access to perf will be blocked by
default. Developers may still access perf by enabling developer settings and
using adb to set a property: "adb shell setprop security.perf_harden 0".


The patchset for blocking access to perf may be broken down into kernel and
userspace sections. The kernel patch is by Ben Hutchings and is derived from
Grsecurity’s CONFIG_GRKERNSEC_PERF_HARDEN by Brad Spengler. The userspace
changes were contributed by Daniel Micay. Thanks to Wish Wu and others for
responsibly disclosing security vulnerabilities in perf.

Restrict app access to ioctl commands



Much of Android’s security model is described and enforced by SELinux. The
ioctl() syscall represented a major gap in the granularity of enforcement via
SELinux. Ioctl command whitelisting with SELinux was added to provide
per-command control over the ioctl syscall.


Most of the kernel vulnerabilities reported on Android occur in drivers and are
reached using the ioctl syscall, for example CVE-2016-0820. Some ioctl commands
are needed by third-party applications; however, most are not, and access can
be restricted without breaking legitimate functionality. In Android Nougat,
only a small whitelist of socket ioctl commands is available to applications.
For select devices, applications’ access to GPU ioctls has been similarly
restricted.

Require seccomp-bpf



Seccomp provides an additional sandboxing mechanism allowing a process to
restrict the syscalls and syscall arguments available using a configurable
filter. Restricting the availability of syscalls can dramatically cut down on
the exposed attack surface of the kernel. Since seccomp was first introduced on
Nexus devices in Lollipop, its availability across the Android ecosystem has
steadily improved. With Android Nougat, seccomp support is a requirement for all
devices. On Android Nougat we are using seccomp on the mediaextractor and
mediacodec processes as part of the media hardening effort.

Ongoing efforts



There are other projects underway aimed at protecting the kernel.


Due to these efforts and others, we expect the security of the kernel to
continue improving. As always, we appreciate feedback on our work and welcome
suggestions for how we can improve Android. Contact us at security@android.com.

Wednesday, July 20, 2016

Strictly Enforced Verified Boot with Error Correction


Posted by Sami Tolvanen, Software Engineer


Overview



Android uses multiple layers of protection to keep users safe. One of these
layers is verified
boot, which improves security by using cryptographic integrity checking to
detect changes to the operating system. Android has alerted about system integrity since Marshmallow,
but starting with devices first shipping with Android 7.0, we require verified
boot to be strictly enforcing. This means that a device with a corrupt boot
image or verified partition will not boot or will boot in a limited capacity
with user consent. Such strict checking, though, means that non-malicious data
corruption, which previously would be less visible, could now start affecting
process functionality more.


By default, Android verifies large partitions using the dm-verity kernel
driver, which divides the partition into 4 KiB blocks and verifies each block
against a signed hash tree when it is read. A detected single-byte corruption
will therefore result in an entire block becoming inaccessible when dm-verity
is in enforcing mode, leading to the kernel returning EIO errors to userspace
on verified partition data access.
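
For a sense of the cost of this checking (an illustrative calculation, assuming the common dm-verity configuration of SHA-256 digests packed into 4 KiB hash-tree blocks; the post does not spell out these parameters): each hash block holds 4096 / 32 = 128 digests, so a 2 GiB partition of 524288 data blocks needs a tree of only 4096 + 32 + 1 = 4129 hash blocks, three levels up to the signed root, or roughly 16 MiB, about 0.8% of the partition.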


This post describes our work in improving dm-verity robustness by introducing
forward error correction (FEC), and explains how this allowed us to make the
operating system more resistant to data corruption. These improvements are
available to any device running Android 7.0 and this post reflects the default
implementation in AOSP that we ship on our Nexus devices.

Error-correcting codes



Using forward error correction, we can detect and correct errors in source data
by shipping redundant encoding data generated using an error-correcting code.
The exact number of errors that can be corrected depends on the code used and
the amount of space allocated for the encoding data.


href="https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction">Reed-Solomon
is one of the most commonly used error-correcting code families, and is readily
available in the Linux kernel, which makes it an obvious candidate for
dm-verity. These codes can correct up to ⌊t/2⌋ unknown errors and up to
t known errors, also called href="https://en.wikipedia.org/wiki/Erasure_code">erasures, when t
encoding symbols are added.


A typical RS(255, 223) code that generates 32 bytes of encoding data for every
223 bytes of source data can correct up to 16 unknown errors in each 255 byte
block. However, using this code results in ~15% space overhead, which is
unacceptable for mobile devices with limited storage. We can decrease the space
overhead by sacrificing error correction capabilities. An RS(255, 253) code can
correct only one unknown error, but also has an overhead of only 0.8%.
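
To see where these percentages come from: an RS(n, k) code adds n − k encoding bytes per k source bytes, so its space overhead is (n − k) / k, which is 32 / 223 ≈ 14.3% for RS(255, 223) and 2 / 253 ≈ 0.8% for RS(255, 253).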



An additional complication is that block-based storage corruption often occurs
for an entire block and sometimes spans multiple consecutive blocks. Because
Reed-Solomon is only able to recover from a limited number of corrupted bytes
within relatively short encoded blocks, a naive implementation is not going to
be very effective without a huge space overhead.

Recovering from consecutive corrupted blocks



In the changes we made to dm-verity for Android 7.0, we used a technique called
interleaving to allow us to recover not only from the loss of an entire 4 KiB
source block, but also from several consecutive blocks, while significantly
reducing the space overhead required to achieve usable error correction
capabilities compared to the naive implementation.


Efficient interleaving means mapping each byte in a block to a separate
Reed-Solomon code, with each code covering N bytes across the corresponding N
source blocks. A trivial interleaving where each code covers a consecutive
sequence of N blocks already makes it possible for us to recover from the
corruption of up to (255 - N) / 2 blocks, which for RS(255, 223) would
mean 64 KiB, for example.
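
Checking the arithmetic for that example: (255 − 223) / 2 = 16 blocks, and 16 × 4 KiB = 64 KiB.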


An even better solution is to maximize the distance between the bytes covered
by the same code by spreading each code over the entire partition, thereby
increasing the maximum number of consecutive corrupted blocks an RS(255, N)
code can handle on a partition consisting of T blocks to ⌈T/N⌉ × (255 − N) / 2.




Interleaving with distance D and block size B.


An additional benefit of interleaving, when combined with the integrity
verification already performed by dm-verity, is that we can tell exactly where
the errors are in each code. Because each byte of the code covers a different
source block—and we can verify the integrity of each block using the existing
dm-verity metadata—we know which of the bytes contain errors. Being able to
pinpoint erasure locations allows us to effectively double our error correction
performance to at most ⌈T/N⌉ × (255 - N) consecutive blocks.


For a ~2 GiB partition with 524256 4 KiB blocks and RS(255, 253), the maximum
distance between the bytes of a single code is 2073 blocks. Because each code
can recover from two erasures, using this method of interleaving allows us to
recover from up to 4146 consecutive corrupted blocks (~16 MiB). Of course, if
the encoding data itself gets corrupted or we lose more than two of the blocks
covered by any single code, we cannot recover anymore.
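
Plugging the numbers into the erasure-doubled formula above confirms this: ⌈524256 / 253⌉ = 2073, and 2073 × (255 − 253) = 4146 blocks, or about 16.2 MiB.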


While making error correction feasible for block-based storage, interleaving
does have the side effect of making decoding slower, because instead of reading
a single block, we need to read multiple blocks spread across the partition to
recover from an error. Fortunately, this is not a huge issue when combined with
dm-verity and solid-state storage, as we only need to resort to decoding if a
block is actually corrupted, which is still rather rare, and random access reads
are relatively fast even if we have to correct errors.

Conclusion



Strictly enforced verified boot improves security, but can also reduce
reliability by increasing the impact of disk corruption that may occur on
devices due to software bugs or hardware issues.


The new error correction feature we developed for dm-verity makes it possible
for devices to recover from the loss of up to 16-24 MiB of consecutive blocks
anywhere on a typical 2-3 GiB system partition with only 0.8% space overhead and
no performance impact unless corruption is detected. This improves the security
and reliability of devices running Android 7.0.

Friday, July 8, 2016

Changes to Trusted Certificate Authorities in Android Nougat


Posted by Chad Brubaker, Android Security team




In Android Nougat, we’ve changed how Android handles trusted certificate
authorities (CAs) to provide safer defaults for secure app traffic. Most apps
and users should not be affected by these changes or need to take any action.
The changes include:


  • Safe and easy APIs to trust custom CAs.
  • Apps that target API Level 24 and above no longer trust user- or
    admin-added CAs for secure connections, by default.
  • All devices running Android Nougat offer the same standardized set of system
    CAs—no device-specific customizations.


For more details on these changes and what to do if you’re affected by them,
read on.


Safe and easy APIs



Apps have always been able to customize which certificate authorities they
trust. However, we saw apps making mistakes due to the complexities of the Java
TLS APIs. To address this, we improved the APIs for customizing trust.
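
For contrast, here is a minimal sketch (illustrative only, not from the original post) of the kind of hand-rolled trust setup this replaces: loading a bundled CA into a KeyStore, wiring it into a TrustManagerFactory, and building an SSLContext. Each step is an opportunity for subtle mistakes, such as accidentally trusting all certificates when an error occurs.

import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

// Pre-Nougat, imperative approach to trusting one bundled CA.
static SSLContext trustBundledCa(InputStream caInput) throws Exception {
    // Parse the CA certificate shipped with the app.
    Certificate ca = CertificateFactory.getInstance("X.509")
            .generateCertificate(caInput);

    // Put it into an empty key store as the only trust anchor.
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    keyStore.load(null, null);
    keyStore.setCertificateEntry("ca", ca);

    // Build a TrustManager and an SSLContext around that key store.
    TrustManagerFactory tmf = TrustManagerFactory.getInstance(
            TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(keyStore);
    SSLContext context = SSLContext.getInstance("TLS");
    context.init(null, tmf.getTrustManagers(), null);
    return context;
}

With the Network Security Config described below, the same trust decision is a few lines of declarative XML instead.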


User-added CAs



Protection of all application data is a key goal of the Android application
sandbox. Android Nougat changes how applications interact with user- and
admin-supplied CAs. By default, apps that target API level 24 will—by design—not
honor such CAs unless the app explicitly opts in. This safe-by-default setting
reduces application attack surface and encourages consistent handling of network
and file-based application data.


Customizing trusted CAs



Customizing the CAs your app trusts on Android Nougat is easy using the Network
Security Config. Trust can be specified across the whole app or only for
connections to certain domains, as needed. Below are some examples for trusting
a custom or user-added CA, in addition to the system CAs. For more examples and
details, see the full documentation.


Trusting custom CAs for debugging



To allow your app to trust custom CAs only for local debugging, include
something like this in your Network Security Config. The CAs will only be
trusted while your app is marked as debuggable.


<network-security-config>
<debug-overrides>
<trust-anchors>
<!-- Trust user added CAs while debuggable only -->
<certificates src="user" />
</trust-anchors>
</debug-overrides>
</network-security-config>


Trusting custom CAs for a domain



To allow your app to trust custom CAs for a specific domain, include something
like this in your Network Security Config.



<network-security-config>  
<domain-config>
<domain includeSubdomains="true">internal.example.com</domain>
<trust-anchors>
<!-- Only trust the CAs included with the app
for connections to internal.example.com -->
<certificates src="@raw/cas" />
</trust-anchors>
</domain-config>
</network-security-config>


Trusting user-added CAs for some domains



To allow your app to trust user-added CAs for multiple domains, include
something like this in your Network Security Config.



<network-security-config>  
<domain-config>
<domain includeSubdomains="true">userCaDomain.com</domain>
<domain includeSubdomains="true">otherUserCaDomain.com</domain>
<trust-anchors>
<!-- Trust preinstalled CAs -->
<certificates src="system" />
<!-- Additionally trust user added CAs -->
<certificates src="user" />
</trust-anchors>
</domain-config>
</network-security-config>


Trusting user-added CAs for all domains except some



To allow your app to trust user-added CAs for all domains, except for those
specified, include something like this in your Network Security Config.



<network-security-config>  
<base-config>
<trust-anchors>
<!-- Trust preinstalled CAs -->
<certificates src="system" />
<!-- Additionally trust user added CAs -->
<certificates src="user" />
</trust-anchors>
</base-config>
<domain-config>
<domain includeSubdomains="true">sensitive.example.com</domain>
<trust-anchors>
<!-- Only allow sensitive content to be exchanged
with the real server and not any user or
admin configured MiTMs -->
<certificates src="system" />
</trust-anchors>
</domain-config>
</network-security-config>


Trusting user-added CAs for all secure connections



To allow your app to trust user-added CAs for all secure connections, add this
in your Network Security Config.



<network-security-config>  
<base-config>
<trust-anchors>
<!-- Trust preinstalled CAs -->
<certificates src="system" />
<!-- Additionally trust user added CAs -->
<certificates src="user" />
</trust-anchors>
</base-config>
</network-security-config>


Standardized set of system-trusted CAs



To provide a more consistent and more secure experience across the Android
ecosystem, beginning with Android Nougat, compatible devices trust only the
standardized system CAs maintained in AOSP.



Previously, the set of preinstalled CAs bundled with the system could vary from
device to device. This could lead to compatibility issues when some devices did
not include CAs that apps needed for connections, as well as potential security
issues if CAs that did not meet our security requirements were included on some
devices.


What if I have a CA I believe should be included on Android?



First, be sure that your CA needs to be included in the system. The preinstalled
CAs are only for CAs that meet our security requirements
because they affect the secure connections of most apps on the device. If you
need to add a CA for connecting to hosts that use that CA, you should instead
customize your apps and services that connect to those hosts. For more
information, see the Customizing trusted CAs section above.



If you operate a CA that you believe should be included in Android, first
complete the Mozilla CA Inclusion Process and then file a feature request
against Android to have the CA added to the standardized set of system CAs.


Friday, June 17, 2016

One Year of Android Security Rewards






A year ago, we added Android Security Rewards to the long-standing Google Vulnerability Rewards Program. We offered up to $38,000 per report that helped us fix vulnerabilities and protect Android users.



Since then, we have received over 250 qualifying vulnerability reports from researchers that have helped make Android and mobile security stronger. More than a third of them were reported in Media Server, which has been hardened in Android N to make it more resistant to vulnerabilities.



While the program is focused on Nexus devices and has a primary goal of improving Android security, more than a quarter of the issues were reported in code that is developed and used outside of the Android Open Source Project. Fixing these kernel and device driver bugs helps improve security of the broader mobile industry (and even some non-mobile platforms).



By the Numbers



Here’s a quick rundown of the Android VRP’s first year:




  • We paid over $550,000 to 82 individuals. That’s an average of $2,200 per reward and $6,700 per researcher.

  • We paid our top researcher, @heisecode, $75,750 for 26 vulnerability reports.

  • We paid 15 researchers $10,000 or more.

  • There were no payouts for the top reward for a complete remote exploit chain leading to TrustZone or Verified Boot compromise.



Thank you to those who submitted high quality vulnerability reports to us last year.




Improvements to Android VRP





We’re constantly working to improve the program and today we’re making a few changes to all vulnerability reports filed after June 1, 2016.





We’re paying more!



  • We will now pay 33% more for a high-quality vulnerability report with a proof of concept. For example, the reward for a Critical vulnerability report with a proof of concept increased from $3,000 to $4,000.

  • A high-quality vulnerability report with a proof of concept, a CTS test, or a patch will receive 50% more.

  • We’re raising our rewards for a remote or proximal kernel exploit from $20,000 to $30,000.

  • A remote exploit chain or exploits leading to TrustZone or Verified Boot compromise increase from $30,000 to $50,000.



All of the changes, as well as the additional terms of the program, are explained in more detail in our Program Rules. If you’re interested in helping us find security vulnerabilities, take a look at Bug Hunter University and learn how to submit high quality vulnerability reports. Remember, the better the report, the more you’ll get paid. We also recently updated our severity ratings, so make sure to check those out, too.





Thank you to everyone who helped us make Android safer. Together, we made a huge investment in security research that has made Android stronger. We’re just getting started and are looking forward to doing even more in the future.

Friday, June 10, 2016

Security "Crypto" provider deprecated in Android N

Posted by Sergio Giro, software engineer





If your Android app derives keys using the SHA1PRNG algorithm from the Crypto
provider, you must start using a real key derivation function and possibly re-encrypt your data.



The Java Cryptography Architecture allows developers to create an instance of a class like a cipher, or a pseudo-random number generator, using calls like:



SomeClass.getInstance("SomeAlgorithm", "SomeProvider");


Or simply:



SomeClass.getInstance("SomeAlgorithm");


For instance,



Cipher.getInstance("AES/CBC/PKCS5PADDING");


SecureRandom.getInstance("SHA1PRNG");


On Android, we don’t recommend specifying the provider. In general, any call to
the Java Cryptography Extension (JCE) APIs specifying a provider should only be
done if the provider is included in the application or if the application is
able to deal with a possible NoSuchProviderException.
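
As a sketch of that defensive pattern (reusing the placeholder provider name from the snippets above):

import java.security.NoSuchAlgorithmException;
import java.security.NoSuchProviderException;
import java.security.SecureRandom;

// Request a named provider only if you can handle its absence.
static SecureRandom randomWithFallback() {
    try {
        // Only sensible if "SomeProvider" actually ships inside the app.
        return SecureRandom.getInstance("SHA1PRNG", "SomeProvider");
    } catch (NoSuchAlgorithmException | NoSuchProviderException e) {
        // Fall back to the platform default.
        return new SecureRandom();
    }
}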



Unfortunately, many apps depend on the now removed “Crypto” provider for an
anti-pattern of key derivation.



This provider only provided an implementation of the algorithm “SHA1PRNG” for
instances of SecureRandom. The problem is that the SHA1PRNG algorithm is not
cryptographically strong. For readers interested in the details, On statistical
distance based testing of pseudo random sequences and experiments with PHP and
Debian OpenSSL, Section 8.1, by Yongge Wang and Tony Nicol, states that the
“random” sequence, considered in binary form, is biased towards returning 0s,
and that the bias worsens depending on the seed.



As a result, in Android N we are deprecating the implementation of the SHA1PRNG
algorithm and the Crypto provider altogether. We’d previously covered the
issues with using SecureRandom for key derivation a few years ago in Using
Cryptography to Store Credentials Safely. However, given its continued use, we
will revisit it here.



A common but incorrect usage of this provider was to derive keys for encryption
by using a password as a seed. The implementation of SHA1PRNG had a bug that
made it deterministic if setSeed() was called before obtaining output. This bug
was used to derive a key by supplying a password as a seed, and then using the
"random" output bytes for the key (where “random” in this sentence means
“predictable and cryptographically weak”). Such a key could then be used to
encrypt and decrypt data.



In the following, we explain how to derive keys correctly, and how to decrypt
data that has been encrypted using an insecure key. There’s also a full
example, including a helper class to use the deprecated SHA1PRNG functionality,
with the sole purpose of decrypting data that would otherwise be unavailable.



Keys can be derived in the following way:




  • If you're reading an AES key from disk, just store the actual key and don't go through this weird dance. You can get a SecretKey for AES usage from the bytes by doing:


    SecretKey key = new SecretKeySpec(keyBytes, "AES");


  • If you're using a password to derive a key, follow Nikolay Elenkov's excellent tutorial with the caveat that a good rule of thumb is that the salt size should be the same as the key output size. It looks like this:



/* User types in their password: */
String password = "password";

/* Store these things on disk used to derive key later: */
int iterationCount = 1000;
int saltLength = 32; // bytes; same size as the output (256 / 8 = 32)
int keyLength = 256; // 256-bits for AES-256, 128-bits for AES-128, etc
byte[] salt; // should be of saltLength

/* When first creating the key, obtain a salt with this: */
SecureRandom random = new SecureRandom();
salt = new byte[saltLength];
random.nextBytes(salt);

/* Use this to derive the key from the password: */
KeySpec keySpec = new PBEKeySpec(password.toCharArray(), salt,
        iterationCount, keyLength);
SecretKeyFactory keyFactory = SecretKeyFactory
        .getInstance("PBKDF2WithHmacSHA1");
byte[] keyBytes = keyFactory.generateSecret(keySpec).getEncoded();
SecretKey key = new SecretKeySpec(keyBytes, "AES");



That's it. You should not need anything else.



To make transitioning data easier, we covered the case of developers that have
data encrypted with an insecure key, which is derived from a password every
time. You can use the helper class InsecureSHA1PRNGKeyDerivator in the example
app to derive the key.



private static SecretKey deriveKeyInsecurely(String password, int keySizeInBytes) {
    byte[] passwordBytes = password.getBytes(StandardCharsets.US_ASCII);
    return new SecretKeySpec(
            InsecureSHA1PRNGKeyDerivator.deriveInsecureKey(
                    passwordBytes, keySizeInBytes),
            "AES");
}


You can then re-encrypt your data with a securely derived key as explained
above, and live a happy life ever after.
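
Putting the two halves together, a minimal migration sketch might look like the following. The AES/CBC/PKCS5Padding transformation and the stored-IV layout are assumptions for illustration; adapt them to however your app actually encrypted its data.

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

// Decrypt under the legacy, insecurely derived key, then re-encrypt under a
// key derived with PBKDF2 as shown earlier. Returns { newIv, newCiphertext };
// persist both.
static byte[][] reencrypt(byte[] legacyCiphertext, byte[] legacyIv,
        SecretKey insecureKey, SecretKey secureKey) throws Exception {
    Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");

    // Recover the plaintext with the key from deriveKeyInsecurely().
    cipher.init(Cipher.DECRYPT_MODE, insecureKey, new IvParameterSpec(legacyIv));
    byte[] plaintext = cipher.doFinal(legacyCiphertext);

    // Re-encrypt; the provider generates a fresh random IV for us.
    cipher.init(Cipher.ENCRYPT_MODE, secureKey);
    byte[] newIv = cipher.getIV();
    byte[] newCiphertext = cipher.doFinal(plaintext);
    return new byte[][] { newIv, newCiphertext };
}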



Note 1: as a temporary measure to keep apps working, we decided to still create
the instance for apps targeting SDK version 23, the SDK version for
Marshmallow, or less. Please don't rely on the presence of the Crypto provider
in the Android SDK; our plan is to delete it completely in the future.



Note 2: Because many parts of the system assume the existence of a SHA1PRNG
algorithm, when an instance of SHA1PRNG is requested and the provider is not
specified we return an instance of OpenSSLRandom, which is a strong source of
random numbers derived from OpenSSL.




Friday, May 6, 2016

Hardening the media stack

Posted by Dan Austin and Jeff Vander Stoep, Android Security team



To help make Android more secure, we encourage and reward researchers who discover vulnerabilities. In 2015, a series of bugs in
mediaserver’s libstagefright were disclosed to Google. We released updates for
these issues with our August and September 2015 security bulletins.



In addition to addressing issues on a monthly basis, we’ve also been working on
new security features designed to enhance the existing security model and
provide additional defense in-depth. These defense measures attempt to achieve
two goals:




  • Prevention: Stop bugs from becoming vulnerabilities
  • Containment: Protect the system by de-privileging and isolating components that handle
    untrusted content



Prevention




Most of the vulnerabilities found in libstagefright were heap overflows
resulting from unsigned integer overflows. A number of integer overflows in libstagefright allowed an attacker to
allocate a buffer with less space than necessary for the incoming data,
resulting in a buffer overflow in the heap.



The result of an unsigned integer overflow is well defined, but the ensuing
behavior could be unexpected or unsafe. In contrast, signed integer overflows
are considered undefined behavior in C/C++, which means the result of an
overflow is not guaranteed, and the compiler author may choose the resulting
behavior—typically what is fastest or simplest. We have added compiler changes
that are designed to provide safer defaults for both signed and unsigned
integer overflows.



The UndefinedBehaviorSanitizer (UBSan) is part of the LLVM/Clang compiler toolchain that detects undefined or
unintended behavior. UBSan can check for multiple types of undefined and unsafe
behavior, including signed and unsigned integer overflow. These checks add code
to the resulting executable, testing for integer overflow conditions during
runtime. For example, figure 1 shows source code for the parseChunk function in the MPEG4Extractor component of libstagefright after the original researcher-supplied patch was
applied. The modification, which is contained in the black box below, appears
to prevent integer overflows from occurring. Unfortunately, while SIZE_MAX and size are 32-bit values, chunk_size is a 64-bit value, resulting in an incomplete check and the potential for
integer overflow. In the line within the red box, the addition of size and chunk_size may result in an integer overflow and the creation of a buffer smaller than size elements. The subsequent memcpy could then lead to exploitable memory corruption, as size + chunk_size could be less than size, which is highlighted in the blue box. The mechanics of a potential exploit
vector for this vulnerability are explained in more detail by Project Zero.





Figure 1. Source code demonstrating a subtle unsigned integer overflow.



Figure 2 compares assembly generated from the code segment above with a second
version compiled with integer sanitization enabled. The add operation that
results in the integer overflow is contained in the red box.



In the unsanitized version, size (r6) and chunk_size (r7) are added together, potentially resulting in r0 overflowing and being less than size. Then, buffer is allocated with the size specified in r0, and size bytes are copied to it. If r0 is less than r6, this results in memory corruption.



In the sanitized version, size (r7) and chunk_size (r5) are added together with the result stored in r0. Later, r0 is checked against r7; if r0 is less than r7, as indicated by the CC condition code, r3 is set to 1. If r3 is 1 and the carry bit was set, then an integer overflow occurred, and an
abort is triggered, preventing memory corruption.



Note that the incomplete check provided in the patch was not included in figure
2. The overflow occurs in the buffer allocation’s add operation. This addition triggers an integer sanitization check, which turns
this exploitable flaw into a harmless abort.





Figure 2. Comparing unsanitized and sanitized compiler output.
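
The inserted check is simple to reproduce. As an illustrative analogy in Java (not from the post; Java integer addition wraps with defined behavior, so the wrapped-sum test can be shown directly), the sanitizer's logic amounts to:

public final class OverflowCheckDemo {
    // The sanitizer's test: for unsigned operands, a wrapped sum compares
    // (unsigned) less than either operand.
    static int checkedUnsignedAdd(int size, int chunkSize) {
        int sum = size + chunkSize; // wraps silently on overflow
        if (Integer.compareUnsigned(sum, size) < 0) {
            // A sanitized binary aborts here instead of corrupting memory.
            throw new ArithmeticException("unsigned integer overflow");
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(checkedUnsignedAdd(100, 200)); // fine: 300
        checkedUnsignedAdd(0xFFFFFF00, 0x200);            // traps, like UBSan
    }
}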



While the integer sanitizers were originally intended as code hygiene tools,
they effectively prevent the majority of reported libstagefright
vulnerabilities. Turning on the integer overflow checks was just the first
step. Preventing the runtime abort by finding and fixing integer overflows,
most of which are not exploitable, represented a large effort by Android's
media team. Most of the discovered overflows were fixed and those that remain
(mostly for performance reasons) were verified and marked as safe to prevent
the runtime abort.



In Android N, signed and unsigned integer overflow detection is enabled on the
entire media stack, including libstagefright. This makes it harder to exploit
integer overflows, and also helps to prevent future additions to Android from
introducing new integer overflow bugs.



Containment




For Android M and earlier, the mediaserver process in Android was responsible
for most media-related tasks. This meant that it required access to all
permissions needed by those responsibilities and, although mediaserver ran in
its own sandbox, it still had access to a lot of resources and capabilities.
This is why the libstagefright bugs from 2015 were significant—mediaserver
could access several important resources on an Android device including camera,
microphone, graphics, phone, Bluetooth, and internet.



A root cause analysis showed that the libstagefright bugs primarily occurred in
code responsible for parsing file formats and media codecs. This is not
surprising—parsing complex file formats and codecs while trying to optimize for
speed is hard, and the large number of edge cases makes such code susceptible
to both accidental and malicious malformed inputs.



However, media parsers do not require access to most of the privileged
permissions held by mediaserver. Because of this, the media team re-architected
mediaserver in Android N to better adhere to the principle of least privilege.
Figure 3 illustrates how the monolithic mediaserver and its permissions have
been divided, using the following heuristics:




  • parsing code moved into unprivileged sandboxes that have few or no
    permissions.
  • components that require sensitive permissions moved into separate sandboxes
    that only grant access to the specific resources the component needs. For
    example, only the cameraserver may access the camera, only the audioserver
    may access Bluetooth, and only the drmserver may access DRM resources.




Figure 3. How mediaserver and its permissions have been divided in Android N.



Comparing the potential impact of the libstagefright bugs on Android N and
older versions demonstrates the value of this strategy. Gaining code execution
in libstagefright previously granted access to all the permissions and
resources available to the monolithic mediaserver process including graphics
driver, camera driver, or sockets, which present a rich kernel attack surface.



In Android N, libstagefright runs within the mediacodec sandbox with access to
very few permissions. Access to camera, microphone, photos, phone, Bluetooth,
and internet as well as dynamic code loading are disallowed by SELinux. Interaction with the kernel is further restricted by seccomp. This means that compromising libstagefright would grant the attacker access
to significantly fewer permissions and also mitigates privilege escalation by
reducing the attack surface exposed by the kernel.



Conclusion




The media hardening project is an ongoing effort focused on moving
functionality into less privileged sandboxes and further reducing the
permissions granted to those sandboxes. While the techniques discussed here
were applied to the Android media framework, they are suitable across the
Android codebase. These hardening techniques—and others—are being actively
applied to additional components within Android. As always, we appreciate
feedback on our work and welcome suggestions for how we can improve Android.
Contact us at security@android.com.


Friday, April 29, 2016

Enhancing App Security on Google Play

Posted by Eric Davis, Android Security Team



We’re constantly investing in new tools and services to help developers build secure Android applications. This includes the application sandbox and Security APIs in the platform, Security APIs in Google Play Services, and even open source testing tools. Last year, Google Play also helped developers enhance the security of their applications by looking directly at the code they’ve written and offering suggestions for improvements.



The Google Play App Security Improvement Program is the first of its kind. It has two core components: We provide developers with security tips to help them build more secure apps, and we help developers identify potential security enhancements when their apps are uploaded to Google Play. This week, to help educate developers, Kristian Monsen, one of our engineers, gave a presentation about security best practices at the Samsung Developer Conference. And in 2015, we worked with developers to improve the security of over 100,000 apps through the program.



How it works



Before any app is accepted into Google Play, it is scanned for safety and security, including potential security issues. We also continuously re-scan the over one million apps in Google Play for additional threats.



If your app is flagged for a potential security issue, you will be notified immediately to help you quickly address the issue and help keep your users safe. We’ll deliver alerts to you using both email and the Google Play Developer Console, with links to a support page with details about how to improve the app.






Typically, these notifications will include a timeline for delivering the improvement to users as quickly as possible. Applications may be required to make security improvements before any other app updates can be published.




You can confirm that you’ve fully addressed the issue by uploading the new version of your app to the Google Play Developer Console. Be sure to increment the version number of the fixed app. After a few hours, check the Developer Console for the security alert; if it’s no longer there, you’re all set!



The success of this program rests on our partnership with you—the developers of apps on Google Play—and the security community. We’re all responsible for providing safe, secure apps to our users. For feedback or questions, please reach out to us through the Google Play Developer Help Center. To report potential security issues in apps, please reach out to us at security+asi@android.com.