iOS, The Future Of macOS, Freedom, Security And Privacy In An Increasingly Hostile Global Environment


This post, by a security researcher who prefers to remain anonymous, elucidates concerns about certain problematic decisions Apple has made, and cautions against future decisions that may be made in the name of "security" while potentially hiding questionable motives. The content of this article represents only the opinion of the researcher. The researcher apologises if any content is seen to be inaccurate, and is open to comments or questions through PGP-encrypted mail.



TL;DR

  1. iOS silently and constantly collects sensitive data and links it to hardware identifiers that are almost guaranteed to be tied to a real identity.

  2. iOS forces users to "activate" devices (including non-cellular ones), which sets up a remote, UUID-linked database entry for the device with Apple (also recording the registration IP) for APNS/iMessage/FaceTime/Siri, and later the Apple ID, iCloud, etc. Apple ought to be open with users about "activation" and allow users to avoid it.

  3. Apple's activation servers are accessed via Akamai, which means sensitive data may be cached by Akamai and its peering partners, which include many global ISPs and IXPs.

  4. There is a risk that macOS could be iOS-ified in the near future in the name of "security", while ignoring significant flaws in iOS's design with respect to privacy, forcing users to unnecessarily trust Apple with potentially sensitive data simply to use their devices.

  5. Controversial, draconian surveillance laws are being implemented worldwide that could take advantage of Apple's data collection and OS design choices - notably in, but not limited to, China, one of Apple's largest markets.

  6. If iOS is really to be considered a secure OS, and if vanilla macOS is to become more secure, independent end-user control must be considered. Increased low-level design security at the cost of control, and of the ability to prevent leaking data, cannot be considered a real improvement in security.



Apple has secured an influential position in the global consumer computing space, helping to cement mobile devices' place in the history of computing. Currently, Apple maintains two main OSes: iOS on ARM iDevices and the recently renamed macOS on x86 laptops and desktops. Both rely on the Darwin kernel, with the former's kernel hardened significantly (see newosxbook.com).

Some of the security features of iOS over macOS will now be briefly covered:

iOS locks a great deal down: it relies on a hardened Darwin kernel, disallows the loading of kernel extensions, only allows the installation of verified apps downloaded from the App Store that have (supposedly, despite questions among some in the security community) been audited for malicious behavior, sandboxes apps relatively restrictively, and of course removes the ability to elevate to root. iDevices themselves verify the OS on boot and only allow an OS signed by Apple to be installed. macOS devices to date lack any form of verified boot process, and the OS lacks iOS's hardened kernel, or the ability to harden it, since SIP locks kernel parameters to Apple's preset values.

On iOS, there is no full-disk or full-volume encryption, only varying levels of file-based encryption, partially dependent on third-party developer choices, such that what is, and isn't, encrypted (with encryption tied to the user passphrase) is not always clear to the end-user. However, one major benefit on iOS is that the encryption is tied to the secure enclave, which should guard against brute-force attacks (assuming there are no major exploits in SEPOS). On macOS, you can use FileVault 2 full-volume encryption with a long passphrase and without a recovery key, but are exposed to the risk of brute-force attacks. In this researcher's humble opinion, full-volume encryption should be separately configurable on top of the file-based encryption, so that a separate encryption password is required at boot, with the only functionality at that point being password entry or shutdown. This highlights the problem with Apple's current approach on iOS, which generally allows little user control or configurability without a network connection, or without interacting with the company in some way. With Apple's recent decision to move to APFS, the state and details of disk encryption on both OSes are slightly unclear, but will hopefully become clearer when iOS 10.3 is released.
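
To illustrate the trade-off, here is a minimal conceptual sketch (the author's model of the mechanism, not Apple's actual scheme; function and variable names are invented) of why entangling a passphrase-derived key with a non-extractable hardware secret defeats offline brute force:

```python
import hashlib
import hmac

def software_only_key(passphrase: bytes, salt: bytes) -> bytes:
    # An attacker who copies the disk can grind passphrase guesses
    # offline, at full speed, against this KDF alone.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)

def enclave_bound_key(passphrase: bytes, salt: bytes, device_secret: bytes) -> bytes:
    # Entangling a per-device secret that never leaves the secure
    # enclave forces every guess through the hardware, which can
    # rate-limit attempts or wipe keys after repeated failures.
    soft = software_only_key(passphrase, salt)
    return hmac.new(device_secret, soft, hashlib.sha256).digest()
```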

At first glance, iOS seems like the better choice for security and, in many ways, it is. iOS does, however, rely much more on relatively new, undocumented and/or closed source code than macOS, which makes its security hard to verify independently. From a low-level design perspective, iOS is built with security in mind. This design, while reducing exploit risk, does mean that users are locked into trusting Apple much more than on macOS, which is an issue given some rather questionable choices by Apple covered further on. Some of these concerns have been discussed previously, e.g. by Jonathan Zdziarski in his 2014 talk Identifying Back Doors, Attack Points, and Surveillance Mechanisms in iOS Devices.

macOS, thanks to the higher level of control given to end-users, can be tweaked and set up relatively securely, in a more freedom-preserving, privacy-conscious manner - even with System Integrity Protection enabled, a relatively weak yet still beneficial security feature that many, perhaps rather hastily, berated as an effort by Apple to prevent end-users from controlling their machines.

Security, privacy, and the relationship between them have often been a matter of debate in the security community. This author believes privacy to be a subset of overall security: to be certain of security, one must be able to prevent leaking any potentially sensitive information to untrusted, and even partially trusted, third parties. The author therefore questions the argument some researchers (many working for Apple, Google, etc.) have presented that overall security is gained even if such privacy issues are ignored. Plain vanilla installs of both iOS and macOS include some very questionable features and design choices that Apple seems keen to promote, and which create serious privacy issues - especially on iOS, since there is no practical way to avoid or disable them.

iOS devices (even non-cellular ones), on first boot and, occasionally and for unclear reasons, after OS upgrades, require "Activation" and an internet connection to contact an array of Apple servers. So, from the start, you cannot use your device until it has been "activated" in a highly opaque process which gives users no indication of what the device is doing, let alone why. Patent US 20090061934A1, "Service Provider Activation with Subscriber Identity Module Policy" (Hauck et al., Apple Inc.), suggests that during activation, an iOS device - even before being set up with an Apple ID for the App Store, let alone iCloud (a whole other can of worms I'd rather leave alone) - uses the UUID (which contains a hash of the globally unique serial number, WiFi MAC address and, if applicable, IMEI, IMSI, ICCID, MSISDN, etc.) to register the device with Apple. (See https://www.theiphonewiki.com/wiki/Activation [note, contrary to the article, it is no longer the case that devices can function in a limited fashion without activation; equally, non-cellular devices cannot be activated without some form of internet access], https://www.theiphonewiki.com/wiki/Activation_Token, http://www.freepatentsonline.com/20090061934.pdf.)
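
As a rough illustration of why such an identifier is so privacy-sensitive, consider a sketch of the kind of derivation the patent describes (the exact inputs, hash function and concatenation order are assumptions, and the values below are placeholders):

```python
import hashlib

# Placeholder hardware identifiers; all of these are stable for the
# lifetime of the device, so any hash over them is equally stable.
serial   = "C02XXXXXXXXX"        # globally unique serial number
wifi_mac = "a4:5e:60:00:00:00"   # WiFi MAC address
imei     = "350000000000000"     # cellular devices only

device_id = hashlib.sha1("|".join([serial, wifi_mac, imei]).encode()).hexdigest()
print(device_id)  # identical on every activation, for the device's lifetime
```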

This UUID registration is subsequently used as a unique identifier for Apple's iOS-integrated Apple Push Notification Service (APNS) (another privacy nightmare), FaceTime, Siri, iMessage and other Apple-hosted services. Apple may argue that it only receives hashes of these identifiers, but examine any proof of purchase for an iOS device and it becomes clear that Apple links the credit card used at purchase, the purchaser's name and email and, of course, the serial number and all the components required to generate a UUID. So, in essence, an iOS device (and every IP address used with it) will, unless purchased from a third party with cash, always be linked to a real identity - especially a cellular device, since Apple receives the IMSI/MSISDN every time a new SIM is inserted.

This means, for example, that if you were to use a certain app for a social network under a pseudonym on an iOS device (not that I would recommend installing any social networking site’s apps on your device) and that service sends information via APNS, Apple (and possibly the social networking service) can most likely link the pseudonym account to your real identity.

If Apple took preserving user freedom to heart, it would allow users to operate devices completely autonomously, as is almost completely the case on macOS at present, and has been for as long as the author can remember.

On iOS, iMessage is enabled by default, without prompting or informing the user. Hence, if you enter contacts into the address book, each contact's details are hashed and automatically sent to Apple, supposedly to check for their presence in Apple's iMessage database so as to determine whether to show iMessage as an option on that contact's page. This ultimately means that Apple has a massive social-graphing capability baked in by default on all iDevices.
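
Note that hashing offers essentially no protection here, because phone numbers occupy a tiny input space. A minimal sketch (the actual hash Apple uses is not public, so SHA-256 is an assumption) of how any collector can precompute a reverse-lookup table:

```python
import hashlib

def h(number: str) -> str:
    return hashlib.sha256(number.encode()).hexdigest()

# Enumerate an entire exchange and invert every possible hash.
# Scaling this to all plausible numbers is cheap for a large operator.
reverse = {h(f"+1415555{n:04d}"): f"+1415555{n:04d}" for n in range(10_000)}

leaked = h("+14155550123")   # the "anonymized" value sent upstream
print(reverse[leaked])       # original number recovered instantly
```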

The iOS and Mac App Stores and the iTunes Store all use the UUID as a way to implement DRM, with the (not unlikely intentional) side-effect of compromising user privacy. This is easily observed on macOS, since the full filesystem is thankfully still visible, but the same mechanism is used on both OSes. To download "secure" apps from the (Mac) App Store, the device has to be de facto registered (signed in) with an Apple ID, which means the UUID is linked to the Apple ID's details, and the device's purchase details, linked to the UUID, are linked to the Apple ID. On macOS, this UUID is then used with the Apple ID in the _MASReceipt/receipt file inside every application bundle downloaded, as a form of DRM (just try removing your Mac's WiFi card and rebooting - all Mac App Store apps will likely fail to open).
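
On macOS the receipt mechanism is easy to observe directly. A small sketch that lists installed Mac App Store applications by the presence of the receipt file described above:

```python
import glob

# Every app downloaded from the Mac App Store carries a _MASReceipt,
# which binds the download to the Apple ID and the machine's identifiers.
for receipt in glob.glob("/Applications/*.app/Contents/_MASReceipt/receipt"):
    app = receipt.split("/Contents/")[0]
    print(app)  # an app that will break if the hardware identity changes
```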

As a result of this, users are (without realising it) linking every download made with their Apple ID to their real identities. If this were only about ensuring the integrity and validity of downloaded files, gpg signatures would serve for verification. If Apple really wanted the DRM aspect (which the author is slightly uncomfortable with, but understands), gpg could still be used, linked locally to a hardware identifier, without having to share that privacy-sensitive identifier with Apple. The Apple ID could then be linked to a random identifier with no link at all to the user's real identity, privacy-sensitive identifiers or serial numbers.
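
A minimal sketch of the decoupled scheme proposed here (the author's proposal, not anything Apple ships; all names are invented): the identifier shared with the vendor is random, while the hardware binding needed for DRM stays on the device:

```python
import hashlib
import hmac
import secrets

# Shared with the vendor: random, revocable, reveals nothing about hardware.
account_id = secrets.token_hex(16)

# Stays on the device: never transmitted anywhere.
hardware_uuid = b"serial|mac|imei..."  # placeholder for real identifiers

# Local DRM binding: a receipt can embed this value, and the device can
# recompute and check it offline, without the vendor learning the UUID.
binding = hmac.new(hardware_uuid, account_id.encode(), hashlib.sha256).hexdigest()
print(account_id, binding)
```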

The fact that Apple has chosen to distribute its "free" macOS upgrades only through the Mac App Store is particularly annoying, since it means users are forced through this mechanism, "registering" their devices with Apple to access upgrades. Again, this could be implemented securely and privacy-consciously with downloadable DMGs accompanied by gpg signatures and checksums. Stranger still, but thankfully, Apple retains direct download links for security updates within a macOS version on the Apple Support pages, but doesn't include downloadable signatures, let alone checksums. The main benefit of macOS in this regard, however, remains that you can install applications from sources other than the App Store (Developer ID verification adds some potential security, but can seriously compromise developer privacy), avoiding Apple's App Store user tracking, which could equally be used to send targeted malicious updates or applications to a target's device.
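
The offline alternative sketched above is straightforward. A minimal example (file names and the published checksum are placeholders, and gpg must already trust the vendor's signing key) of verifying a downloaded installer without contacting any server:

```python
import hashlib
import subprocess

def verify_dmg(dmg: str, sig: str, published_sha256: str) -> None:
    # Hash the file in chunks so multi-gigabyte installers
    # don't need to fit in memory.
    h = hashlib.sha256()
    with open(dmg, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    assert h.hexdigest() == published_sha256, "checksum mismatch"
    # Verify the detached gpg signature; raises if verification fails.
    subprocess.run(["gpg", "--verify", sig, dmg], check=True)

# verify_dmg("macOS-upgrade.dmg", "macOS-upgrade.dmg.sig", "<published digest>")
```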

iOS OTA updates again silently collect UUID metadata on every update check, meaning Apple can identify exactly which device - and most likely which person - is checking for an update. You can download iOS updates directly from Apple and try to flash them using iTunes, but iTunes will require network access to "check if the update file is genuine". This check collects the UUID of the device iTunes is running on, plus the UUID and other hardware identifiers of the iOS device (again, for unknown reasons), before allowing the update to proceed (see the iPhone Wiki). It is perfectly possible to implement an equally secure process without placing so much trust in, and sending so much data to, Apple. Users should, if they wish, be able to become the root of trust without having to leak potentially sensitive data to Apple. On macOS you can separately download an update/upgrade DMG, which is signed by Apple, and simply install it without a network connection. Following the upgrade there is no "activation" request, and you can continue to use the device autonomously from Apple.



Apple's UUID-based surveillance system



Another underappreciated issue with iOS activation, beyond Apple collecting this data at all, is that despite activation data being sent over an encrypted connection, Apple uses the global CDN Akamai to cache/host almost all of its services, including the activation server. This means, unless this researcher has missed something, that all data Apple collects may be cached worldwide by Akamai, including on the servers of Akamai's global peering partners (ISPs and IXPs). This raw data is therefore potentially accessible to authorities or intelligence services that Apple would not usually cooperate with directly (not that this list is likely so large), giving them access to more detailed information than Apple usually releases. It also means that if an ISP's or IXP's servers are not configured as securely as Apple's, or even Akamai's main servers, this data is at higher risk of being stolen or intercepted. Equally, given Akamai's role in the CDN analytics business, Akamai may be able to link some of this interaction with Apple's services to other Akamai-hosted services a given IP address interacts with.

China would seem like a perfect example of a country where this could be an issue. However, Apple has likely modified iOS on mainland Chinese devices: this researcher cannot verify how activation works on iOS devices sold in mainland China, Apple made a deal in 2015 to host its services with China Telecom to fall in line with the Communist Party's globally criticised national security laws, and Apple's (Akamai) activation servers (albert.apple.com) seem to be inaccessible in China. Then again, if you are a democracy activist in Hong Kong or another SAR, this should probably be of concern. Either way, this issue matters whether you are in China or not: Akamai peers heavily with U.S., European, Asian and other IXPs and ISPs worldwide and, given the political atmosphere in many of these territories of late, it should at least be of interest.

This researcher fears - given recent industry interest in verified boot, the increasing appearance of various system binaries and PrivateFrameworks from iOS in macOS, the increasing UI simplification of previously highly configurable parts of the OS, Apple's decision to rebrand OS X as macOS in line with iOS, and recent relatively high-level dismissals and hires - that Apple plans to iOS-ify macOS or merge the two OSes at some point in the future. Apple should hold back on this, allow users to remain independent, and consider a TPM-based measured boot process (despite its pitfalls, it is better than nothing at all) as opposed to moving to something like Intel's Boot Guard verified boot, which to date only allows ODMs/OEMs to receive signing keys. (Trammell Hudson's HeadsOS would seem like good inspiration - https://www.github.com/osresearch/HeadsOS; https://trmm.net/Heads.) Apple's users should, without having to identify themselves or their devices, be able to flash their own signing keys to a TPM, so that if a user feels like removing/modifying certain Apple system binaries they are uncomfortable with, they can do so without compromising security, and simply flash a new measured-boot hash.
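
For readers unfamiliar with measured boot, below is a minimal model of the TPM PCR-extend operation at its core (a simplification of what Heads builds on; a real TPM performs this in hardware and can seal secrets to the resulting value):

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # PCR extend: the register absorbs each measurement in order and
    # cannot be rolled back, so any modified component changes the
    # final value.
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at power-on
for stage in [b"firmware", b"bootloader", b"kernel", b"initrd"]:
    pcr = extend(pcr, stage)

# A user who deliberately modifies a system binary simply records the
# new expected value and reseals - no vendor signature, device
# identifier or network access is required.
print(pcr.hex())
```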

Even with all of the obviously privacy-compromising options disabled in iOS's settings, devices still constantly contact Apple servers in the background for unknown or unnecessary reasons. Blocking Apple servers with network firewalls makes keeping the OS and apps up to date impossible, meaning that if privacy and its related subsets of security are to be ensured, users must expose themselves to exploits and vulnerabilities. This again highlights the central issue: Apple, not the end-user, holds the root of trust. Apple should not be so deeply integrated into the activities or software used on the device if it cannot clearly prove to end-users that it can be trusted, and ought simply to offer users the option of using devices without forced post-purchase customer-client interaction. On macOS, the sources of network requests can be identified and the related launchd user agents or daemons unloaded; where that is not possible due to SIP, the specific binaries being loaded, or their launch agents/daemons, can be deleted or modified using the Recovery OS.

Apple's incentives behind these design choices should not be ignored. This is especially true since iOS 10 and the "differential privacy" (dprivacyd) concept Apple has pushed, which this author feels essentially boils down to "let's collect even more data without giving a real reason, and spin it as a privacy improvement because we remove certain metadata - after all, none of our users understand or care anyway". The chance that shareholders are pushing Apple into the ad and data-collection business (have your cake and eat it?) should not be ignored.
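
For context, the textbook mechanism underlying differential privacy is randomized response, sketched below. Apple has not published its exact mechanism, so this illustrates the general idea only; the point of contention above is not the math, which is sound, but what gets collected under its banner:

```python
import random

def report(truth: bool) -> bool:
    # Each device answers honestly only half the time, so any single
    # answer is plausibly deniable...
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

# ...yet the collector still recovers the population rate from the noise.
answers = [report(random.random() < 0.3) for _ in range(100_000)]
observed = sum(answers) / len(answers)
estimated_true_rate = 2 * observed - 0.5  # inverts the coin-flip bias
print(round(estimated_true_rate, 2))      # ~0.3
```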

This should be considered even more seriously given the recent "anti-terrorism" and "national security" laws and measures that nations worldwide have implemented - including China (especially given that China is Apple's second-largest market by sales), but also most European countries. These laws (to varying degrees) force service providers, and practically any technology company, to collect as much data as possible about their users, to force users to identify themselves, and to hand the data over to the government when requested. Apple's reliance on China should be of concern to all users, especially given the swath of data Apple collects.

iOS makes many good security decisions low down in the OS but, thanks to higher-level design choices, these decisions leave the root of trust solidly in Apple's hands, not the end-user's. This would not be as major an issue if the OS were built with privacy in mind and gave users more visibility and low-level choice, but this is simply not the case at the moment. This means that iOS, while a secure OS design, fails on overall security, since it is practically impossible to prevent potentially sensitive data from being collected by Apple or leaking in some other way.

One would hope that Apple, especially since the FBI encryption debacle last year, would see the value in being open with its userbase (and refrain from dumbing important issues down), open-sourcing as much as possible, and allowing users to use devices in complete autonomy from Apple. Wherever possible, it should offer the option of moving the root of trust from itself to the end-user, without requiring further information or a network connection. At the moment on iOS this is impossible and, as previously discussed, devices without iCloud - even devices that have never been logged into an App Store account - essentially remain "cloud" devices, linked to an Apple UUID database covering all data collected on a given device.

This post only covers a fraction of the sensitive data collected by iOS, and the full extent of this data collection is unknown. The researcher of course recognizes that iOS is slightly better than Google's semi-closed-source Android in this regard (since on Android, activities are explicitly linked to a Google account via Play Services/Google Apps), and that iOS's low-level OS design is substantially more secure than Android's. Google, for now at least, still maintains AOSP, which remains buildable and configurable in a privacy-conscious manner. However, Apple, unlike Google, often promotes itself as caring about its users' privacy. This researcher would question that claim, and hopes others will examine it further.

To those with the power to influence this (if you ever read this): Apple produces high-quality, elegant hardware which is not being used to its full potential. Please consider giving users the ability to manage their devices autonomously and securely, without having to interact with you at all after purchase; leave the option of the root of trust in users' hands if they feel comfortable with it; and move away from, and stop forcing users into, a walled garden where they are encouraged to rely on you for everything in their lives (financial services, entertainment, navigation, file storage, communication, etc.). Consider looking back to some of the values that were more popular on the internet in the 1990s and early 2000s, and at least offer users the option of avoiding the Apple ID and UUID system which, examined closely, ends up resembling the Orwellian internet ID that many in the security, human-rights and privacy communities have feared and rejected. At the very least, maintain macOS's high user-configurability and its ability to be used autonomously as it is today. If you want to increase OS security, look to the Qubes project and, for firmware security, to CoreBoot and/or Trammell Hudson's HeadsOS for inspiration or ideas.

Oh, and Apple: why, in 2017, do you still use weak 3DES encryption for macOS Keychain passwords, and why not encrypt keychain metadata as well as passwords?




Update

Having read Matt Green's recent blog post (https://blog.cryptographyengineering.com/2017/03/05/secure-computing-for-journalists/), which suggests journalists use iOS, I still stand by my earlier criticism. He highlights one point - that iOS is more securely designed and better maintained than desktop OSes - but does not look beyond this; the critical flaw in his reasoning is treating this as all there is to iOS. iOS is, in some ways, quite clearly more secure than a plain vanilla desktop OS. But to say that it can keep data safer than a hardened, locked-down Mac with full filesystem and network control, or even a correctly configured Qubes setup on a corebooted laptop (like Trammell Hudson's HeadsOS), would seem naive. iOS collects vast amounts of sensitive information, gives little information to users, makes relatively deceptive UI design choices, and ends up not giving end-users a secure option to independently become the root of trust or configure their devices as they see fit.

Green (and especially some others in the security community) seem to have extended the meme "if you have the NSA in your threat model, you may as well give up" to "if you have Apple or the NSA in your threat model, you may as well give up". Green's logic, as some of the comments on the blog suggest, may be that journalists do not have the time or expertise to configure their devices securely, or to integrate and strictly enforce the operational security model necessary to use desktop OSes securely; and that even if they were to gain such expertise, so few of their colleagues and contacts would adopt a similar level of discipline that it would be of little use - hence the best option is just to go with the "easy choice" of iOS. However, in this researcher's humble opinion, by not placing sufficient emphasis on the trade-offs, he in fact leaves readers - especially those who are worried about the issue but have little expertise - with the TL;DR "If you use a desktop OS for your work, stop now, move your sensitive work to iOS exclusively and use Signal".

In fact, the way the majority of users use mobile OSes - installing a multitude of social media and messaging apps that collect sensitive information or metadata in the background (yes, even Signal does this; don't even mention WhatsApp) - creates many more risks. There is no way to monitor or intercept filesystem events, network connections and other system calls on such a device, and you are giving apps many, many more privileges than you realise [1]. This means the "opsec" model necessary to actually prevent leaking sensitive information on iOS devices ends up being much, much more complex (call it practically impossible) than the one necessary on a suitably configured and hardened desktop operating system. If some of the issues raised in this researcher's main article are resolved, Green's advice may become sensible; otherwise it just seems risky.

Green also seems to have forgotten that Apple is a global company, and silently (but significantly) configures devices differently in certain previously mentioned jurisdictions. So if an investigative journalist trying to document democracy movements, or the nefarious activities of officials in said jurisdictions, hears of his advice and decides to use iOS because it is recommended as "more secure for journalists", they may end up putting their sources, themselves, and the publication of their work at risk.

Given that more general news services will pick up the blog post and most likely reduce it to "use an iPhone if you want to be secure", iOS will receive yet another reputation boost in the eyes of the general public. This makes it even less likely that the actual amount of sensitive information leaking from these devices will become generally known and, at the same time, gives Apple yet more justification to cite if it does eventually decide to merge the OSes, or at least iOS-ify macOS.

Finally, if Green actually thinks that handling sensitive material, e.g. the Snowden files, on iOS would be safe, he (or anybody else who wants to verify this) may reconsider after doing the following:

Start by setting up a simple, free, open source firewall system on a spare server or in a VM. Configure it to block all DNS resolution for Apple servers and to block Apple (and Akamai) IP ranges (this may not be sufficient, given Akamai's peering with ISPs/IXPs). Then connect a fresh, unconfigured iOS device to it via WiFi and try to activate it. (A non-cellular device is best, since all data sent during activation will then pass through your firewall, whereas a cellular model will send some data via the cellular connection - unless, of course, you happen to have a spare cellular base station lying around.)

After realising that it is not possible to proceed, unblock the Apple/Akamai servers, then actively monitor DNS requests and network connections during activation (many open source tools exist for this), and pcap the entire session.

Once complete, stop and save the packet capture (the capture should speak for itself). Do not sign in with an Apple ID or actively link the device to any accounts.

Then disable the network connection and lock down the settings on the device, disabling as many services that rely on Apple servers, or that are potentially invasive, as possible, going through every menu and nested submenu in "Settings" methodically.

Once satisfied, start a new packet capture and once again monitor DNS requests and network connections; re-enable the iOS device's network connection, leave it on and connected, and just watch the DNS resolutions and packets pass by.
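
A minimal sketch of that monitoring step using scapy (one of many tools that will do the job; the timeout and output file name are placeholders, and sniffing requires root):

```python
from scapy.all import DNSQR, sniff, wrpcap  # pip install scapy

def show_query(pkt):
    # Print each DNS question as it goes by; during activation, expect
    # a steady stream of apple.com and Akamai-hosted names.
    if pkt.haslayer(DNSQR):
        print(pkt[DNSQR].qname.decode())

packets = sniff(filter="udp port 53", prn=show_query, timeout=600)
wrpcap("ios-activation.pcap", packets)  # keep the capture as evidence
```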




(Slightly) Condensed Update (2018)

TL;DR - iOS-ification of macOS, the problem with coprocessors and "security"

  1. macOS is increasingly at risk of being iOS-ified

  2. The introduction of iOS-based coprocessors on the latest Mac models seems to cement this concern

  3. iOS and iOS-based coprocessors force the regular sending of incredibly sensitive metadata to Apple - for questionable and unknown reasons - as the price of merely being able to use the device

  4. Mac coprocessors dramatically increase the power available to a malicious Apple, or to an adversary with Apple's compliance

  5. Coprocessors occupy a highly privileged position on the board and very likely have network access independent of the CPU

  6. While providing security in many ways, coprocessors introduce a large amount of potentially exploitable code in order to protect against vulnerabilities in a smaller, but legacy and highly exploitable, embedded firmware codebase

  7. The end result is uncontrollable devices that are less suitable for sensitive environments with strict network and security requirements

  8. Is "If you have NSA in your threat model, give up!" becoming "If you have the NSA, your device manufacturer/OS developer and all entities it cooperates with in your threat model, give up!" ??!!


Apple seems (despite vague claims to the contrary) increasingly interested in merging or "unifying" the two OSes, and there are constant rumors of fundamental changes to macOS that would make it far more like iOS than the macOS of old. Apple's introduction of ARM-based coprocessors running iOS/sepOS - first the T1 on the TouchBar MacBook Pros (which runs the TouchBar, implements NFC/ApplePay, adds biometric login using the SEP, and verifies firmware integrity), then the iMac Pro's T2 (which implements/verifies embedded device firmware, implements secure boot, etc.) - seems to cement this concern, and basically renders using macOS devices without sending metadata to Apple difficult to impossible.

iOS devices have always required "activation" on first boot, and after the battery has gone completely dead, which initializes sepOS so that verified boot can proceed. First-boot activation not only initializes sepOS as discussed below, but sends metadata to Apple (and, via Apple, to carriers for cellular devices) to activate the baseband and SIM. In activation processes after first boot, just as on first boot, a long list of highly sensitive metadata is sent, hashed, to Apple so that it can return the personalized response required for secure boot to complete (note that hashing gives you no privacy from Apple here, since Apple links this exact metadata to payment information at purchase). What is particularly worrying about this process is that it is a network-linked secure boot process in which centralized external servers have the power to dictate what the device should boot. Equally, there are significant privacy concerns with devices constantly sending metadata (during activation and every other Apple-linked/-hosted activity), strongly linking IP addresses to real identities via purchase payment information and, on cellular devices, metadata collected about the SIM. Such connections can only be blocked at the network level on self-managed infrastructure (i.e. not cellular), and blocking them basically renders the device unusable, since simply installing an application requires sending device metadata to Apple.
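
Why hashing buys no privacy here can be shown in a few lines: a deterministic hash of an identifier the recipient already holds is just a join key (the serial below is a placeholder):

```python
import hashlib

# The vendor records the raw serial, buyer and payment card at purchase...
purchase_records = {
    hashlib.sha256(b"C02PLACEHOLDER").hexdigest(): "name, card, email",
}

# ...so the "anonymous" hash sent at every activation re-identifies the
# device (and the IP address it came from) instantly.
activation_hash = hashlib.sha256(b"C02PLACEHOLDER").hexdigest()
print(purchase_records[activation_hash])
```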

Because the activation verification mechanism is designed specifically to rely on unique device identifiers that are associated with payment information at purchase - and are actively associated, on a continuing basis, with every Apple-hosted service the device interacts with (Apple ID-based services, softwareupdate, iMessage, FaceTime, etc.) - the possibility of Apple silently sending targeted malicious updates to devices matching specific unique-ID criteria is a valid concern, and not something to dismiss as unlikely, especially given Apple's full compliance with recently implemented national security laws in China and other authoritarian (and "non-authoritarian") countries.
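
A deliberately simplified and entirely hypothetical sketch of the targeting risk this enables (nothing here reflects Apple's actual update infrastructure): when a personalization server keys its response on a device-unique ID, serving one device a different payload is a one-line change:

```python
import hashlib

# Hypothetical: hashed IDs of devices singled out for a special build.
TARGETS = {hashlib.sha256(b"C02TARGETSERIAL").hexdigest()}

def personalized_update(device_id_hash: str) -> str:
    # Both payloads would be validly signed; the client has no way to
    # know that anyone else received something different.
    if device_id_hash in TARGETS:
        return "signed-implant-build.ipsw"
    return "signed-regular-build.ipsw"
```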

iOS has from the start been designed with very little end-user control: there is no way for end-users to configure devices according to their wishes while maintaining security, and the OS relies heavily on new, closed source code. While macOS has for most of its history presented a similar surface, power and enterprise users can (for the moment) still configure their devices relatively securely while maintaining essentially zero network interaction with Apple and, with the installation of third-party software/kernel extensions, completely control the network stack and intercept filesystem events on a per-process basis. macOS, despite including a good deal of closed source code, was designed in a very different period of Apple's history, more in line with open source standards, and built to be configurable and controllable by enterprise and power users.

The introduction of these coprocessors to Mac devices, while increasing security in many ways, brings with it all the issues with iOS discussed above, and makes running Mac devices securely - with complete user control and without forced network interaction with the Apple mothership - problematic and risky in highly sensitive corporate and other environments. Given that this author is unaware of the exact hardware configuration of the coprocessors, the following may be inaccurate. However, given their low-level nature, it would not surprise the author if these coprocessors eventually have (if they do not already) separate network access of their own, independent of the Intel CPU (indications suggest this is not currently the case for the T1; it is unclear for the T2). This leads to concerns similar to those many have raised around Intel ME/AMT (and of course Mac devices also have the ME in the Intel CPU...). One could argue that these coprocessors increase security and, in many ways, that is the case - but not the user's security against a malicious Apple.

The lack of configurability is the key issue. Apple could have introduced secure boot and firmware protection without requiring network access, without linking verification to device-unique IDs, and without introducing an enormous amount of potentially exploitable code (to protect against a much smaller, but highly exploitable, codebase) running on a coprocessor whose highly privileged position on the board gives immense power to an adversary with manufacturer compliance to mount targeted attacks.

This is an ongoing concern and, in the worst-case scenario, could represent the end of Macs as independent, end-user-controllable and relatively secure systems appropriate for sensitive environments with strict network and security policies.


[1] For example, accessing Facebook through a hardened, properly configured, fully open source browser gives Facebook far fewer privileges than installing Facebook-developed apps on a mobile device, because Facebook was not directly involved in the development of the browser. This means no data or sensitive information is extractable beyond what is accessible via JavaScript in the browser (which can be restricted) without using an exploit. This links more broadly to the contemporary issue of the slow death of web standards, as highly integrated, closed source (or partly open source, with proprietary tracking elements or dependencies) mobile-only services grow in popularity and receive heavy investment.
