
Big Brother from Apple: How the iPhone Manufacturer Reads Messages to Fight Crime

How does Apple intercept illegal content that ends up in iCloud? Is the correspondence read by a live person or by a machine? Forbes USA found the answers to these questions in documents filed by American law enforcement. Apple rarely lets the public in on the details of its work, and that applies not only to the security of iOS but also to how the company acts when it encounters criminals, namely, how it examines their emails and instant messages.

Hearing something like this, a user might start to worry about their privacy on the internet. On the one hand, such actions by the manufacturer help prevent crimes; on the other hand, they increase the risk of identity theft. That is why some users turn to IdentityIQ reviews to learn more about ways of protecting themselves from prying eyes and malicious actors.

How Does Apple “Intercept” Emails?

Forbes USA has obtained a search warrant that, for the first time, shows how the iPhone manufacturer intercepts and verifies messages containing illegal material, such as images of child abuse. The warrant was issued in early February in Seattle, Washington.

To begin with, we need to clarify the situation: Apple does not manually check all of our emails. Like other big tech companies such as Facebook and Google, Apple uses digital signatures to detect images depicting child abuse.

These signatures are digital fingerprints computed from known images of child abuse. When Apple's systems (not its employees) notice that images with matching signatures pass through the company's servers, they raise a flag, and the emails or attachments that potentially contain illegal images are passed to company employees for further review.
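For readers curious about the mechanics, here is a minimal sketch of how such signature matching might work. Everything in it is illustrative: real systems use perceptual hashes such as PhotoDNA, which survive resizing and re-encoding, whereas the plain SHA-256 below only matches byte-identical files, and the signature database shown is a made-up placeholder.

```python
import hashlib

# Hypothetical database of signatures of known illegal images, of the kind
# supplied by a clearinghouse such as NCMEC. Real systems use perceptual
# hashes (e.g., PhotoDNA) so that resized or re-encoded copies still match;
# plain SHA-256 is used here only to keep the sketch simple.
KNOWN_SIGNATURES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def signature(image_bytes: bytes) -> str:
    """Compute an image signature (placeholder for a perceptual hash)."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_attachment(image_bytes: bytes) -> bool:
    """Return True if the attachment matches a known signature.

    Only matching items are escalated to human review; everything
    else passes through without anyone ever looking at it.
    """
    return signature(image_bytes) in KNOWN_SIGNATURES
```

The point the sketch illustrates is that no human sees any content unless an attachment's signature matches the database.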

What Happens Next?

When something like this happens, Apple employees have grounds to contact the appropriate official body, typically the National Center for Missing and Exploited Children (NCMEC). After receiving a report about illegal content, NCMEC staff contact law enforcement agencies, which open an investigation.

Apple employees do not stop there, however. First of all, they block delivery of the emails containing such material. They then view the contents of the emails and attachments and report any images suspected to depict child pornography.

In the case covered by the warrant, an Apple employee examined each of the suspect images. Apple then gave the investigation considerable assistance by handing over data on the iCloud user: the name, address, and mobile number the user provided at registration. Law enforcement also requested the contents of the user's emails and instant messages, along with all files and other records stored in iCloud.

Is There a Privacy Problem?

If Apple employees view users' emails only when the system detects images of child abuse, there is no real privacy problem.

Like every tech company, Apple has to balance user privacy against security. This approach lets the company search for images depicting abuse while keeping safeguards against misuse of its ability to read email: no matter how automated the initial detection stage is, the final check should be carried out by a person.

Still, Woodward hopes law enforcement won't ask Apple to look for other kinds of material as well. The arrangement immediately raises the question of whether the system could be abused by issuing warrants to search for entirely different images.

What About Messages Protected by End-to-End Encryption?

The real battlefield is encrypted messaging. Apple's systems cannot flag illegal content in messages protected by end-to-end encryption, because only the users themselves hold the keys.
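To see why, consider a minimal sketch, assuming Python's cryptography package and its symmetric Fernet scheme as a stand-in for a real end-to-end protocol (actual messengers use schemes like the Signal double ratchet): the server only ever handles ciphertext, so signature matching has nothing meaningful to match against.

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

# The key lives only on the users' devices; the provider's servers never
# see it. Fernet (symmetric encryption) stands in here for a real
# end-to-end protocol; the principle is the same.
user_key = Fernet.generate_key()
cipher = Fernet(user_key)

attachment = b"...image bytes..."
ciphertext = cipher.encrypt(attachment)  # all the server ever receives

# Server-side signature matching would run on the ciphertext, not on the
# image itself, so it can never match a database of known material.
print(hashlib.sha256(ciphertext).hexdigest())
print(hashlib.sha256(attachment).hexdigest())  # a completely different value
```

Without the key, the server cannot recover the attachment, and the signatures of ciphertext and plaintext bear no relation to each other.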

The so-called encryption wars have lately been debated around the world, with government authorities asking technology companies to help break into their own security systems and provide access to user data. The FBI recently sent Apple a letter demanding help unlocking two iPhones belonging to the alleged terrorist who opened fire at a U.S. Navy base in Pensacola in December 2019.

The FBI wanted the encrypted data from the shooter's phones to look for possible leads, but it did not explain why it had failed to unlock them with a GrayKey device, a tool intelligence agencies use for exactly this purpose. Such devices have long been able to break into older iPhone models like the iPhone 5 and iPhone 7, the very models the Pensacola shooter owned. A senior FBI official believes the bureau's demand could damage Apple's relationship with the agency.
