Critics Say Apple Built a “Backdoor” Into Your iPhone With Its New Child Abuse Detection Tools

Photo: STR/AFP (Getty Images)

techno.rentetan.com – Privacy advocates worry the new features could be a slippery slope.

Apple’s plan to roll out new features to fight Child Sexual Abuse Material (CSAM) on its platforms has stirred no small amount of controversy.

The company is essentially trying to pioneer a solution to a problem that has stymied both law enforcement and technology companies in recent years: the large and ongoing crisis of CSAM proliferation on major internet platforms. In 2018 alone, tech companies reported 45 million photos and videos that constituted child sexual abuse material, a staggering number.

While this crisis is real, critics fear that Apple’s new features, which involve algorithmic scanning of users’ devices and messages, constitute a breach of privacy and could one day be repurposed to search for material other than CSAM. Such a shift could open the door to new forms of broad-based surveillance and serve as a potential workaround for encrypted communications, one of privacy’s last and best safeguards.

To understand these concerns, it helps to look at the specifics of the proposed changes. First, the company will roll out a new tool that scans photos uploaded to iCloud from Apple devices for signs of child sexual abuse material. According to a technical paper published by Apple, the feature uses a “neural matching function” called NeuralHash to assess whether images on a user’s iPhone match known “hashes,” or unique digital fingerprints, of CSAM.

It does this by comparing the images shared to iCloud against a large database of CSAM imagery compiled by the National Center for Missing and Exploited Children (NCMEC). If enough matching images are detected, they are flagged for review by human operators, who then alert NCMEC (and who presumably tip off the FBI).
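As a rough illustration of that flow, the matching logic amounts to checking each uploaded photo’s perceptual hash against the known database and flagging the account only once enough matches accumulate. The sketch below is an assumption-laden stand-in, not Apple’s actual NeuralHash system; the type names, the plain-string hashes, and the threshold value are all placeholders.

```swift
// Illustrative sketch only: PerceptualHash, loadKnownHashes, and the
// threshold value are placeholders, not Apple's actual NeuralHash system.
typealias PerceptualHash = String

func loadKnownHashes() -> Set<PerceptualHash> {
    // In the real system the NCMEC-derived database ships inside the OS
    // in a blinded form; plain strings stand in for hashes here.
    return []
}

// Enough matches must accumulate before an account is flagged for human
// review; the exact value here is an arbitrary placeholder.
let reviewThreshold = 30

func accountNeedsHumanReview(uploadHashes: [PerceptualHash]) -> Bool {
    let knownHashes = loadKnownHashes()
    // Count only uploads whose hash matches the database; nothing is
    // learned about photos that do not match.
    let matchCount = uploadHashes.filter { knownHashes.contains($0) }.count
    return matchCount >= reviewThreshold
}
```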

Some people have worried that their phones contain photos of their own children in the bath or running naked through a sprinkler. According to Apple, there is no need to worry: the company has emphasized that it does not “learn anything about images that do not match the known CSAM database,” so it is not simply rifling through your photo albums and looking at whatever it wants.

Apple will also launch a new iMessage feature designed to “warn children and their parents when receiving or sending sexually explicit photos.” Specifically, the feature warns a child when they are about to send or receive an image that the company’s algorithm has deemed sexually explicit. The child is told that they are about to view a sensitive image and asked to confirm that they want to see it (the incoming photo remains blurred until the user consents to viewing it).

If a child under 13 nonetheless decides to send or view such a picture, their parent is then notified of the incident.
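A minimal sketch of that decision flow might look like the following. The types, names, and age handling are assumptions made for illustration, not Apple’s actual implementation.

```swift
// Illustrative sketch: ChildAccount and MessagePhotoAction are made-up
// types, not Apple's API; the logic simply mirrors the flow described above.
struct ChildAccount {
    let age: Int
}

enum MessagePhotoAction {
    case showNormally
    case blurAndWarn(notifyParentOnView: Bool)
}

func handleIncomingPhoto(flaggedAsExplicit: Bool,
                         account: ChildAccount) -> MessagePhotoAction {
    guard flaggedAsExplicit else { return .showNormally }
    // The photo stays blurred until the child confirms they want to see it;
    // for children under 13, the parent is also notified if the child
    // chooses to view (or send) the image anyway.
    return .blurAndWarn(notifyParentOnView: account.age < 13)
}
```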

All that said, news of these two updates, which will arrive later this year with the release of iOS 15 and iPadOS 15, has not been met kindly by civil liberties advocates. The concerns vary, but at bottom, critics worry about the privacy risks the new technology introduces and the ways it could be deployed.

The iMessage update raises concerns about how encryption works, what it is meant to protect, and how the new feature essentially circumvents that protection.

Encryption protects the contents of a user’s messages by scrambling them into unreadable ciphertext before they are sent, which largely defeats the point of intercepting a message, since it cannot be read. However, the way Apple’s new feature is configured, communications involving child accounts will be scanned for sexually explicit material before a message is encrypted. Again, this does not mean Apple can read a child’s text messages; it only searches for images that its algorithm considers inappropriate.
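The ordering is what matters here. The sketch below uses made-up type and function names (SecureChannel, looksSexuallyExplicit, warnChildAndBlur) to show the point critics are making: the on-device check runs against the plaintext image, and encryption is applied only afterward.

```swift
import Foundation

// Illustrative sketch only: SecureChannel, looksSexuallyExplicit, and
// warnChildAndBlur are placeholders, not Apple's real APIs.
struct SecureChannel {
    func encrypt(_ plaintext: Data) -> Data {
        // Stand-in for iMessage's end-to-end encryption.
        return plaintext
    }
    func transmit(_ ciphertext: Data) {
        // Stand-in for sending the ciphertext over the network.
    }
}

func looksSexuallyExplicit(_ image: Data) -> Bool {
    // Stand-in for the on-device classifier.
    return false
}

func warnChildAndBlur(_ image: Data) {
    // Stand-in for blurring the image and showing the warning prompt.
}

func sendFromChildAccount(_ image: Data, over channel: SecureChannel) {
    // 1. The check runs on-device, before any encryption, so the
    //    classifier sees the plaintext image.
    if looksSexuallyExplicit(image) {
        warnChildAndBlur(image)
    }
    // 2. Only then is the message end-to-end encrypted; the server and
    //    anyone intercepting the transmission still see only ciphertext.
    let ciphertext = channel.encrypt(image)
    channel.transmit(ciphertext)
}
```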

Still, the precedent this shift establishes is potentially worrying. In a statement Thursday, the Center for Democracy and Technology argued that the iMessage update erodes the privacy provided by Apple’s end-to-end encryption: “A mechanism for Apple to scan images in iMessages is not an alternative to a backdoor, but a backdoor,” the Center said. “Client-side scanning on one end of the communication breaks the security of the transmission, and informing a third party (the parent) about the content undermines its privacy.”

Privacy advocates have similar concerns about the planned scanning of iCloud uploads. Jennifer Granick, surveillance and cybersecurity counsel for the ACLU’s Speech, Privacy, and Technology Project, told Gizmodo via email that she is concerned about the potential implications of the photo scanning: “However altruistic its motives, Apple has built a system for broad monitoring of the conversations and data held on our phones,” she said.

“The CSAM scanning capability could be repurposed to censor, or to identify and report, content that is not illegal, depending on which hashes the company decides, or is forced, to include in the matching database,” she added. “For this and other reasons, it is also susceptible to abuse by autocrats abroad, by overzealous government officials at home, or even by the company itself.”

Even Edward Snowden chimed in:

To be clear, critics are not faulting Apple for its mission to combat CSAM, but for the tools it is using to do so, which they say are a slippery slope. In an article published Thursday, the privacy-focused Electronic Frontier Foundation pointed out that scanning capabilities like those in Apple’s tools could eventually be retrained to hunt for other kinds of images or text, essentially creating a workaround for encrypted communications and a means of policing private interactions and personal content. The EFF writes:

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.

These concerns become particularly germane when it comes to the features’ rollout in other countries, with some critics warning that Apple’s tools could be abused and subverted by corrupt foreign governments. In response, Apple confirmed to MacRumors on Friday that it plans to expand the features on a country-by-country basis. When it considers distribution in a given country, the outlet reports, the company will first conduct a legal evaluation.

India McKinney, director of federal affairs at EFF, raised another concern in a phone call with Gizmodo on Friday: neither tool is auditable, which means it is impossible to independently verify that they work the way they are supposed to.

“External groups like ours, or anybody else (researchers), don’t have a way to look under the hood to see how well it works, how accurate it is, or how many false positives there are,” she said.

“Once the system is rolled out and pushed onto phones, who’s to say they won’t respond to government pressure to start including other things: terrorism, content that unflatteringly depicts political leaders, all sorts of other things.” Notably, the EFF pointed in its Thursday article to the database maintained by the Global Internet Forum to Counter Terrorism (GIFCT): a technology originally built to scan and hash child sexual abuse imagery has since been repurposed to help online platforms find and moderate or ban “terrorist” content centered on violence and extremism.

For all these reasons, a group of privacy advocates and security experts has published an open letter asking Apple to reconsider its new features. As of Sunday, the letter had more than 5,000 signatures.

But it is not clear whether any of this will change the tech giant’s plans. In an internal company memo leaked Friday, Apple’s VP of software Sebastien Marineau-Mes acknowledged that “some people have misunderstandings, and more than a few are worried about the implications,” but said the company will “continue to explain and detail the features so people understand what we’ve built.” Meanwhile, NCMEC sent a letter to Apple employees that refers to the program’s critics as “the screeching voices of the minority.”