Facebook doesn’t need any more bad press, but here we go again. According to a report by Bloomberg, the company paid hundreds of contractors to transcribe audio snippets from your conversations within its services – without making it sufficiently clear these clips were being recorded. Other tech giants have done the same, but that doesn’t make the practice any less problematic.
The employees interviewed by Bloomberg say they do not know where the audio was recorded or how it was obtained – only that they were supposed to transcribe it. These clips sometimes included “vulgar content.”
Facebook confirmed to Bloomberg it had stopped transcribing audio “more than a week ago,” following Apple and Google’s lead. The social network said contractors were verifying the performance of its AI transcription tools and that the conversations were anonymized. The clips came from users who “chose the option in Facebook’s Messenger app to have their voice chats transcribed,” according to the report.
This is presumably the “voice to text” option that you can enable after sending a voice clip in Messenger, a feature that was first introduced in 2015. Neither the Messenger app nor a support page on how to enable/disable the feature specifies that Facebook would be able to review these conversations. Though a support page notes the feature uses machine learning, the average person would not expect actual human beings to be listening in on their conversations. The page does note the feature is disabled in Messenger’s secret conversations, which are encrypted.
Facebook is far from the only company that’s listened in on users’ voice clips. Amazon, Apple, and Google have all done the same to improve their voice assistants, and it was just a few days ago that they stopped the practice or began to give users the option to opt out. Facebook says it stopped listening to Messenger clips after Apple and Google had a change of heart.
You might be thinking, “if the data is anonymized, what’s the big deal?” The problem is these companies did not make it explicit enough that user conversations could be reviewed by actual people. There’s a big difference between a computer listening to a conversation and an actual human being doing so.
Moreover, it seems these clips are often not properly anonymized. According to a report by The Guardian in late July, Apple contractors analyzing Siri activations would often come across “drug deals, medical details, and people having sex.” These recordings were “accompanied by user data showing location, contact details, and app data,” said the whistleblower. This was most common with accidental Siri activations – one of the things contractors were testing for.
Earlier this year, Facebook announced it was planning to encrypt conversations across all of its services. That’s a step in the right direction, but even without encryption, you expect some degree of privacy in your conversations.
When Facebook changed its data-use policy last year to make it more intelligible, it made no direct mention of audio recordings being reviewed. It only said it would collect “content, communications, and other information you provide” when using its apps. Most people reading this know AI models are trained and supervised by real people, but the layperson has no idea when or how extensively their data can be used.
Though Facebook and others have made some progress, it’s generally only after extensive backlash. It’s time for tech giants to preemptively make it absolutely clear when a user’s privacy is compromised.
Source: The Next Web