The important thing about operating system updates (any OS, not just iOS) is not so much the new features they bring. What really matters are the security and bug fixes, because unpatched flaws can endanger our systems in a way that leaves nobody safe.
Today we will try to explain a chain of iOS bugs, discovered and fixed months ago, that the Google Project Zero team has now made public. They show us how important security research is, and how important it is to apply the updates that fix these flaws.
The first thing we need to understand, at the security level, is the difference between the various threats our systems may face:
- Viruses: programs that run without our consent, hide their execution, and spread without our knowledge through various means. Only a scanner that looks for their signature can find them. They are practically nonexistent on macOS and Linux, whose architecture makes this self-replicating model very difficult.
- Trojans: programs we run believing they do one thing, but which contain a malicious function. For example, a fake Flash installer that plants a RAT (remote administration tool) to take control of our machine without our knowledge. They do exist on macOS and Linux. On iOS they both exist and do not: App Store review prevents them from reaching the system, but if a developer manages to sneak one past the review team, it will be there. Even so, the way apps run on iOS prevents them from reaching the system itself, although (with our prior consent) they could access, use, and steal our data. These are very isolated cases, but we cannot categorically claim they do not exist.
- Exploits: also called security flaws or zero-days. Strictly speaking, the exploit is the code that takes advantage of a security hole. Nobody is free of these. They are software bugs, mistakes a programmer made while writing the code of a program or system, which can be abused so that malicious code does things the system's security model says it should not be able to do.
These are serious flaws that, through programming errors, allow an attacker to gain privileges in the system they should not have and use them to do harm: privileges such as accessing the full file system on iOS, escaping the app sandbox, or disabling code signature verification.
Security flaws in iOS 12.1.4
Project Zero, a division of Google, has spent recent years finding countless security flaws in both Google's own and third-party products. They found the vulnerabilities in Intel chips (and those of other makers) known as Spectre and Meltdown, and many other serious bugs in Android, iOS, Windows, and more. They do a very professional job, always respecting those affected: they report privately, without publicity, and bring their discoveries to light only once the bugs have been properly fixed.
On this occasion, the Google Project Zero team has published its findings about several iOS bugs, which let us see clearly that security affects everyone equally and must be taken very seriously.
The TAG team (Google's Threat Analysis Group) detected earlier this year a series of malicious web pages that were exploiting several security holes in iOS. Some of them were zero-days, because they had not yet been discovered by the team responsible for the software.
Simply by visiting a specific web page, thousands of iPhones were attacked each week and remote monitoring software was installed on them.
Working with the threat analysis team, Google Project Zero found a total of 14 vulnerabilities in the system software, used across 5 different exploit chains. Seven of the bugs were in the Safari browser, five in the system kernel, and two allowed any app or process to escape the sandbox that keeps apps away from the kernel, gaining full permission to access and modify it: that is, root access.
Specifically, it was found that one web-based exploit chain still worked on the latest versions of iOS, unpatched and unknown to Apple, so Cupertino was notified on February 1. The flaws in question, which Apple described in this security report, are privilege escalation bugs.
- CVE-2019-7286: An app could gain elevated privileges through memory corruption, caused by improperly validated input data.
- CVE-2019-7287: A similar memory corruption affected the system's input/output management library, allowing an app that had already gained elevated privileges to execute arbitrary code with kernel (system owner) permissions.
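The "improperly validated input" class of bug behind both CVEs can be illustrated with a toy model. This is only a sketch: kernel memory is mimicked here by a flat Python list, and every structure and name below is invented for the demonstration, not taken from the real iOS kernel.

```python
# Toy model of an unvalidated-input privilege escalation. A fixed-size
# buffer sits next to a privilege field; a write routine trusts a
# caller-supplied offset, so an out-of-range offset corrupts the
# adjacent field. All structures here are invented for illustration.

KERNEL_MEMORY = ["buf0", "buf1", "buf2", "buf3", "uid=501"]  # last slot: caller's uid
BUFFER_SIZE = 4  # only indices 0..3 belong to the buffer

def write_unchecked(offset: int, value: str) -> None:
    """Buggy: never checks that offset < BUFFER_SIZE."""
    KERNEL_MEMORY[offset] = value

def write_checked(offset: int, value: str) -> None:
    """Fixed: validates the input before writing."""
    if not 0 <= offset < BUFFER_SIZE:
        raise ValueError("offset out of range")
    KERNEL_MEMORY[offset] = value

# An attacker-controlled offset lands one slot past the buffer and
# overwrites the privilege field: privilege escalation in miniature.
write_unchecked(4, "uid=0")
print(KERNEL_MEMORY[4])  # uid=0 — the caller is now "root"
```

The fix for this class of bug is always the same: validate every externally supplied value before using it, as `write_checked` does.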
And we should take this very seriously, because this privilege escalation allows monitoring software to be permanently installed and run on our device: software that can extract all our data, record calls and send the recordings to remote servers (or let attackers listen in real time), obtain our exact location, activate the cameras to capture video or photos of whatever is in front of them, and turn on the microphones to record what happens around the device… all without the user being aware of anything, except perhaps a curiously unusual battery drain.
Chain of exploits
The first of the exploits, according to Google's research, dates from the iOS 10 era and basically allows the device to be jailbroken. Google, in an extensive article you can read here, gives an exhaustive technical account of how this vulnerability is exploited. In the end, the attack on the system's input/output layer achieved what we have said: circumventing the code signature verification performed by the amfid process, the Apple Mobile File Integrity Daemon. This process is responsible for ensuring that any code about to be executed in memory comes from a source digitally signed by Apple and validated.
And what is a signature? It consists of computing a hash, or verification value, of a piece of code and encrypting it for later verification. If I have a program, it is just data. I calculate a hash: a single value, derived from a series of arithmetic operations over all of that data, that validates its authenticity. The moment a single byte of that data changes, we obtain a different hash.
I encrypt that hash with Apple's certificate, using the private key that only Apple holds. A digital certificate always has two parts: the private part, which encrypts, and the public part, which can only decrypt what the private part encrypted. Once I have that hash, I encrypt it and ship it alongside the app. The system recomputes the hash of the code about to run, decrypts the stored value, and if the computed hash matches the saved one, the signature is valid: it proves the code has not been modified in any way since Apple signed that check. That is the essence.
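The hash-and-verify idea described above can be sketched in a few lines of Python. This is only an illustration: real code signing uses asymmetric cryptography (Apple's private key signs, a public key verifies), which we approximate here with a shared HMAC key, and the key name is hypothetical.

```python
# Minimal sketch of the signature check described above. The asymmetric
# private/public key pair is simulated with a single HMAC secret for
# brevity; the structure of the check (hash, sign, recompute, compare)
# is the same.
import hashlib
import hmac

SIGNING_KEY = b"apple-private-key"  # hypothetical stand-in for Apple's private key

def sign(code: bytes) -> bytes:
    """Hash the code, then 'encrypt' the hash with the signing key."""
    digest = hashlib.sha256(code).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def verify(code: bytes, signature: bytes) -> bool:
    """Recompute the hash and compare against the shipped signature."""
    digest = hashlib.sha256(code).digest()
    expected = hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

binary = b"original app code"
sig = sign(binary)

print(verify(binary, sig))                # True: untouched code passes
print(verify(b"original app codE", sig))  # False: one changed byte breaks it
```

Changing even a single byte of the code produces a completely different hash, so the stored signature no longer matches and the system refuses to run the modified binary.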
But if I get past that process, I can execute any code from any source without restriction (what we know as a jailbreak), and I can also access any part of the system and its memory without restrictions. Something extremely dangerous. That is why, when people talk casually about jailbreaking and how wonderful it is, they do not realize how serious this security problem is: with the simple visit to a website, an email we receive, or an app that nobody has vetted, we can open our device to anyone who wants to monitor it and/or extract all its information for any unethical purpose.
Another of the exploits was designed to attack systems from iOS 10.3 onward and was patched by Apple in iOS 11.2. You can read the information here. It allowed reading and writing the system's kernel memory. Normally this memory is protected by KASLR, or Kernel Address Space Layout Randomization, a technique by which the memory reserved for core operations is placed in random areas on each boot, never in the same place, so that it is harder for an attacker to deduce at which address a given piece of kernel data will end up. Accessing this memory is one more step toward locating the amfid process in order to skip the system's signature check.
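The KASLR idea can be illustrated with a toy simulation: the kernel's load address gets a random "slide" at every boot, so a hard-coded address from a previous run no longer points at the same data. All addresses and sizes below are made up for the demonstration; they are not real iOS values.

```python
# Toy illustration of KASLR. On each "boot" a random, page-aligned
# slide is added to a fixed base address, so the same kernel symbol
# lives at a different absolute address every time. The constants are
# hypothetical, chosen only to look address-like.
import secrets

STATIC_BASE = 0xFFFFFFF007004000  # hypothetical unslid kernel base
PAGE_SIZE = 0x1000

def boot_kernel() -> int:
    """Pick a random page-aligned slide and return the slid base."""
    slide = secrets.randbelow(0x4000) * PAGE_SIZE
    return STATIC_BASE + slide

base_boot_1 = boot_kernel()
base_boot_2 = boot_kernel()

# The same data lands at a different absolute address on each boot, so
# an attacker must first leak the slide (via an info-leak bug) before
# any hard-coded kernel address becomes useful.
print(hex(base_boot_1))
print(hex(base_boot_2))
```

This is exactly why the exploit chain needed a kernel memory read primitive first: without leaking the slide, the attacker cannot know where anything in the kernel actually lives.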
The next one, the third, whose information is here, allows deceiving the system into placing code in its temporary folder /tmp and granting it execution privileges, which content in that path should never have. Basically, it accesses the kernel's trust cache (where the flaw lies) and replaces the credentials of an allowed process with those of the code placed in the temporary folder. This tricks the system into letting the app loaded from the web, which should not be able to run (but has run, because the signature is no longer verified), escape from the app sandbox.
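The trust-cache confusion can be reduced to a very small sketch. The real kernel trust cache maps code-directory hashes to trusted status; here it is modeled as a plain dictionary to show why swapping in an attacker's entry defeats the check. Everything below is illustrative, not real kernel structure.

```python
# Hypothetical sketch of the trust-cache abuse described above. A dict
# stands in for the kernel trust cache: only Apple-signed hashes should
# ever appear in it, and execution is allowed only for cached hashes.
import hashlib

def cd_hash(code: bytes) -> str:
    """Stand-in for a binary's code-directory hash."""
    return hashlib.sha256(code).hexdigest()

trusted_binary = b"apple-signed system binary"
malicious_binary = b"payload dropped into /tmp"

# The kernel's trust cache, as populated legitimately.
trust_cache = {cd_hash(trusted_binary): "platform"}

def may_execute(code: bytes) -> bool:
    return cd_hash(code) in trust_cache

print(may_execute(malicious_binary))  # False: not in the cache

# The bug: an attacker able to write to the cache swaps in their own
# hash with a trusted process's credentials.
trust_cache[cd_hash(malicious_binary)] = "platform"

print(may_execute(malicious_binary))  # True: the system now trusts it
```

Once the malicious hash carries a trusted entry, every downstream check that consults the cache treats the payload as a legitimate platform binary.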
The fourth is the method used to read the KASLR-randomized layout, so as to determine where the system will store the key data of the processes running in the kernel, and to modify it in order to divert the flow of execution to malicious code. The information is here.
And the last one is a security flaw found simultaneously by a member of the Google Project Zero team (who was credited by Apple in the iOS 12.1.3 release notes, where the bug was patched) and by the hacker @S0rryMyBad, who won a $200,000 prize with it at the TianFu Cup security conference. A flaw that, once again due to a data-validation error in C code, allowed an app to run outside the system sandbox: a bug in the kernel's handling of vouchers. The bug is explained here.
All these chained flaws were basically designed so that a single visit from a device running an unpatched version of iOS 10 through iOS 12 would find some way to be exploited, taking advantage of one or more of the 14 security bugs.
Security, something to be taken very seriously
As you can see, nobody is safe. It would be absurd to conclude that iOS is a bad operating system because of everything that flaws like these can achieve, despite all the effort Apple puts in. As we have said, nobody is safe, and using a device connected to the network exposes us to such bugs on any system.
Any operating system, mobile or desktop, any web server, any app… any software is liable to have a vulnerability, and to have it exploited. The effort of groups like Google Project Zero is commendable and deserving of applause, because they help make us a little safer every day. But "the bad guys" keep working, and they are very good at it. Today an iOS zero-day can fetch millions of dollars on the black market; what matters is the ethics of not wanting to profit from it, and that companies such as Apple offer rewards and recognition to the researchers who, with great effort, find the holes that nobody should slip through.
Can this be fixed one day? Can secure code be written? Code gets safer every day, but there will always be a way, or someone will find one, to get past security. Writing perfect code is impossible, because code is written by human beings, and the attacks are so specific and targeted that no tool today is capable of finding them all.
Only expert research makes it possible for us to be somewhat more protected. That, and updating everything to the latest version, because tens or hundreds of holes are patched every year in all systems, many of which never reach public attention. Do not underestimate the need to update, for our own security.