Apple has responded to Senator Al Franken's concerns over the privacy implications of its Face ID feature, which is set to debut on the iPhone X next month. In his letter to Tim Cook, Franken asked about customer security, third-party access to data (including requests by law enforcement), and whether the tech could recognize a diverse set of faces.
In its response, Apple indicates that it has already detailed the tech in a white paper and a Knowledge Base article -- which, it says, provide answers to "all of the questions you raise". But, it also offers a recap of the feature regardless (a TL;DR, if you will). Apple reiterates that the chance of a random person unlocking your phone with their face is one in a million (compared to one in 50,000 for Touch ID). And, it notes that after five unsuccessful match attempts, a passcode is required to access your iPhone.
More significantly, Apple provides a summary of how it stores Face ID biometrics, which gets to the heart of the privacy concerns. "Face ID data, including mathematical representations of your face, is encrypted and only available to the Secure Enclave. This data never leaves the device. It is not sent to Apple, nor is it included in device backups. Face images captured during normal unlock operations aren't saved, but are instead immediately discarded once the mathematical representation is calculated for comparison to the enrolled Face ID data."
On the topic of data-sharing, it writes: "Third-party apps can use system provided APIs to ask the user to authenticate using Face ID or a passcode, and apps that support Touch ID automatically support Face ID without any changes." It continues: "When using Face ID, the app is notified only as to whether the authentication was successful; it cannot access Face ID or the data associated with the enrolled face."
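The system API Apple is referring to is its LocalAuthentication framework. As a rough sketch of what that success-or-failure-only model looks like in practice (the prompt string and fallback handling here are illustrative, not from Apple's letter), an app might request biometric authentication like so:

```swift
import LocalAuthentication

let context = LAContext()
var error: NSError?

// Check whether biometric authentication (Face ID or Touch ID) is available on this device.
if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your account") { success, authError in
        // The app receives only a Bool -- never the face scan or
        // the mathematical representation held in the Secure Enclave.
        if success {
            // Proceed with the authenticated action.
        } else {
            // Fall back to a passcode prompt or handle the error.
        }
    }
}
```

Because the same policy call covers both biometric types, this is also why apps written for Touch ID pick up Face ID support without code changes.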
Interestingly, the company dodges the Senator's question about data requests from law enforcement. But, by indicating that the data lives inside the Secure Enclave, which it can't access, it's suggesting that it won't be able to hand over info that it doesn't possess. It could also be holding back in light of its scrap with the Department of Justice last year, which saw it refuse to unlock an iPhone 5C used by one of the San Bernardino shooters.
As Sen. Franken noted in his letter, Apple trained its Face ID neural network on a billion images. But, that's not to say the photographs were of a billion different faces. For its part, Apple claims it looked at a "representative group of people" -- although it's still silent about exact numbers. It adds: "We worked with participants from around the world to include a representative group of people accounting for gender, age, ethnicity and other factors. We augmented the studies as needed to provide a high degree of accuracy for a diverse range of users." Of course, we'll get to see how accurate Apple's tech is when the new iPhone makes its way into more hands next month.
For now, it seems the Senator is satisfied with the company's initial response, which he plans to extend into a conversation about data protection. You can read his full statement below:
"As the top Democrat on the Privacy Subcommittee, I strongly believe that all Americans have a fundamental right to privacy. All the time, we learn about and actually experience new technologies and innovations that, just a few years back, were difficult to even imagine. While these developments are often great for families, businesses, and our economy, they also raise important questions about how we protect what I believe are among the most pressing issues facing consumers: privacy and security. I appreciate Apple's willingness to engage with my office on these issues, and I'm glad to see the steps that the company has taken to address consumer privacy and security concerns. I plan to follow up with Apple to find out more about how it plans to protect the data of customers who decide to use the latest generation of iPhone's facial recognition technology."
- This article originally appeared on Engadget.