Tech giants Apple, Amazon, and Google have changed the world with their “smart assistants”, summoned by phrases such as “Hey Siri”, “Alexa” and “OK Google”. But we have also read horror stories of Alexa activating and recording conversations without having been summoned. We don’t like the idea of robots listening to our private conversations, but what if it turned out that human analysts were doing just that?
A few months ago, it emerged that these companies, as well as Facebook, have humans listening to (and leaking) snippets of our voice recordings and messages. The purpose, they have all said, is to improve artificial intelligence voice recognition systems and enhance the customer experience.
Equally frightening, the horror story referenced above turned out to be true: some of the recordings analysed by humans had been captured by smart assistants on occasions when customers had not summoned the assistant with its “wake-up” phrase.
Under the GDPR, a person’s voice is considered their personal data, because individuals can be identified by the sound of their voice. As an example, Scarlett Johansson’s voice was easily recognisable in the film “Her”. What’s more, accidental recordings mean that, at some point, these machines may capture customers discussing other sensitive personal information, such as their bank details or their health.
There have been some hurried statements published by the tech giants confirming that these recordings were anonymised before being handed over to the analysts. Whilst this is possible for data related to location or a device’s serial number, it may be difficult (if not impossible) to anonymise the sound of a voice, or potentially sensitive personal data discussed by the individuals in the recordings.
Having established the problem, we can begin to identify solutions. At a minimum, our voices should be distorted so as not to be identifiable. Alternatively, analysts could be given transcripts instead of audio recordings.
Another threat lies in devices recording the audio of external third parties. Although the owner of a smart assistant may have consented to the recording of his or her data, a bystander who happens to have a conversation with that owner has not. Perhaps the AI underpinning these machines could be trained to recognise whose data it is recording and to avoid recording unfamiliar voices. Or perhaps the machine could be required to ask the user for explicit consent to record information on each occasion, as the GDPR requires when processing sensitive data.
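To give a sense of what the simplest of these mitigations involves, the sketch below crudely distorts a voice by naive resampling, which raises the pitch of the waveform. This is purely illustrative (the function name and the 1.25 shift factor are my own choices, not any vendor’s method): it would defeat casual recognition of a speaker, but not a determined analyst, and real de-identification would need a proper voice-conversion pipeline.

```python
import numpy as np

def distort_voice(samples: np.ndarray, rate: float = 1.25) -> np.ndarray:
    """Crudely shift pitch by resampling the waveform.

    Playing the resampled clip at the original sample rate raises the
    perceived pitch by `rate` (and shortens the clip). This is a toy
    obfuscation only: it is trivially reversible by resampling back.
    """
    n_out = int(len(samples) / rate)
    old_idx = np.arange(len(samples))
    new_idx = np.linspace(0, len(samples) - 1, n_out)
    return np.interp(new_idx, old_idx, samples)

# Example: one second of a 440 Hz tone at 16 kHz becomes
# a 0.8 s clip whose dominant frequency is 550 Hz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
shifted = distort_voice(tone, rate=1.25)
```

Because the distortion is so easily undone, stronger approaches (or handing analysts transcripts instead of audio) would be needed before such recordings could plausibly be called anonymised.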
The Hamburg data protection authority, a German privacy watchdog, ordered Google to cease this processing of personal data in order to “protect the rights and freedoms of data subjects” (Article 66 GDPR), a first in GDPR history, although Google had already suspended the activity in the EU. The processing also caught the eye of the Irish Data Protection Commission and Luxembourg’s National Commission for Data Protection, which are taking a closer look at this serious issue. Nonetheless, no official complaints or sanctions appear to have been issued by the Member States of the European Union. As for the tech giants’ responses, Apple has issued an apology, and Amazon says it will add an option to auto-delete voice recordings to its new smart clocks, smart ovens, high-end speakers, earbuds, and glasses. What is certain for now is that all four companies have hit the brakes on this legally questionable use of users’ personal data.
Article written by Deborah Tastiel