WhatsApp has long promoted its privacy features, touting “end-to-end encryption” as a guarantee of message security. As its security page explains, “End-to-end encryption ensures only you and the person you’re communicating with can read or listen to what is sent, and nobody in between, not even WhatsApp.” According to a report by ProPublica, this is not the whole story.
Facebook, the company that owns WhatsApp, is no stranger to privacy controversies. In July 2019, the Federal Trade Commission (FTC) fined Facebook 5 billion USD for privacy violations. Now WhatsApp is the one in hot water, as ProPublica reports that moderators can read your messages once they have been reported.
Specifically, once content has been reported, WhatsApp moderators can see the last five messages in the thread, regardless of consent from the other party. You might ask, “How is that different from submitting a screenshot of the chat to report it?” That comparison is the core of Facebook’s defense.
Facebook claims that this does not break end-to-end encryption as tapping the “report” button creates a new chat between the reporter and WhatsApp, and essentially just copy-pastes the content to them.
WhatsApp also uses AI to flag content, which is arguably necessary at this point: 400,000 reports were made to child safety authorities last year alone, and it would be nearly impossible to keep up with the volume of inappropriate content on the app without automation. The problem arises when moderators receive innocent pictures, such as a child taking a bath, by mistake. WhatsApp told ProPublica that it receives an inordinate number of such images, and that “a lot of the time, the artificial intelligence is not that intelligent.” The company does note, however, that the AI cannot scan through all messages, due to the encryption.
In its terms of service, WhatsApp says that when content is reported, it “receives the most recent messages” from the conversation and “information on your recent interactions with the reported user.” This wording is vague, and it does not disclose that moderators can also see users’ phone numbers, profile photos, status messages, and IP addresses.
The AI also finds inappropriate content by scanning unencrypted data, including “the names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, as a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.”
WhatsApp is also willing to share metadata with law enforcement: it recently supplied evidence that a government official had been communicating with a BuzzFeed reporter, which ultimately resulted in a six-month prison sentence.
All of this contradicts WhatsApp’s public image as a staunch defender of its users’ data. Earlier this year, when the Indian government sought to pass a law allowing authorities to view suspects’ messages, WhatsApp pushed back with this statement:
“Requiring messaging apps to ‘trace’ chats is the equivalent of asking us to keep a fingerprint of every single message sent on WhatsApp, which would break end-to-end encryption and fundamentally undermines people’s right to privacy.”
So, what is the solution? I think WhatsApp should be more transparent about what moderators can see in various circumstances.