Voice to Tweet Fuels New Fears of Deep Fakes and Cybercrime

by Redvers Curry

Jun 25, 2020

A new era of human experience is upon us. Advances in cloud computing, social applications, low-cost bandwidth and the ubiquity of smart devices have ushered us into a much more connected world. This digital era comes with new responsibilities.

Today, social media helps billions of people connect, communicate and collaborate in ways never imagined. Through these channels, we can exert influence for good or for bad. The outcome of that influence can have instant real-world consequences.

As consumers move through this digitally connected world, their identities play a vital role in the quality of their experience. They want their identities to be known, remembered and effortlessly ingrained into every digital interaction for the sake of convenience – without fear of misuse. Unfortunately, across many social media platforms, we see a barrage of attacks targeting identities. Fake news, deep fakes, shallow fakes, misinformation and disinformation are all on the table. Deep fakes are on the rise, driven by easier access to low-cost, better-quality tools. And it’s not just videos or photos that can be manipulated: there are serious risks surrounding advances in audio, from “voice to text” to Twitter’s new “voice to post” capability.

The concern with voice and audio features – like a voice Tweet – is that they take your words, record them and broadcast them publicly to millions of strangers. Every public recording is also a clean sample of your voice that an attacker can harvest. There are many examples of voice hacking technology, and some videos even demonstrate how mobile devices and home IoT devices can be manipulated as tools to gain access to homes, hotel rooms, vehicles and more.

For example: You’re visiting a friend’s house and see an Alexa device on the kitchen counter. You shout out a command – “Alexa, open the garage door!” – and hear the garage door opening. The next day, you watch your friend leave in their car, walk up to a window, pull out a megaphone and shout, “Alexa, open the garage door,” and it does. It’s an exaggerated example, but you get the point: the authentication methods verifying voice authenticity are flawed.
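To make the flaw concrete, here is a minimal, hypothetical sketch in Python – not any vendor’s real API – contrasting a handler that acts on whatever it hears with one that first checks who is speaking. The transcribe() and speaker_embedding() functions are toy stand-ins for real speech-to-text and speaker-recognition models.

    import math

    def transcribe(audio: bytes) -> str:
        # Toy stand-in for a speech-to-text model.
        return audio.decode("utf-8", errors="ignore")

    def speaker_embedding(audio: bytes) -> list:
        # Toy stand-in for a speaker-recognition model (a "voiceprint").
        return [float(b) for b in audio[:8].ljust(8, b"\0")]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm if norm else 0.0

    OWNER_VOICEPRINT = speaker_embedding(b"owner enrollment sample")
    MATCH_THRESHOLD = 0.99

    def handle_naive(audio: bytes) -> str:
        # Anyone within microphone range is implicitly "authenticated".
        if transcribe(audio) == "open the garage door":
            return "EXECUTED"
        return "ignored"

    def handle_verified(audio: bytes) -> str:
        # Gate the command on *who* is speaking, not just *what* was said.
        if cosine(speaker_embedding(audio), OWNER_VOICEPRINT) < MATCH_THRESHOLD:
            return "REJECTED: speaker not recognized"
        return handle_naive(audio)

    megaphone_attack = b"open the garage door"
    print(handle_naive(megaphone_attack))     # EXECUTED - the attack works
    print(handle_verified(megaphone_attack))  # REJECTED - voiceprint mismatch

The naive handler is effectively how an assistant behaves when any nearby voice – a guest, a megaphone, a TV ad – counts as the owner.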

By introducing new innovations, companies need to also prioritize user education and security. Users need to be aware that anything that takes voice as an input and acts on their behalf can potentially be manipulated and exploited. “Voice skins” or “voice clones” are digital assets that transform a speaker’s voice in real time, allowing anyone to “speak” as their chosen target. The possibilities are endless, especially as powerful machine learning algorithms make it easier to pick out voices amid background noise. Your phone is already designed to respond to voice commands in a crowded setting – even in audio the human ear can’t detect. As with many technological advances happening around us, we cannot overlook the inherent risks.
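One conceptual mitigation worth teaching users about is challenge-response: the device demands a fresh, randomly chosen phrase, so a stale pre-recorded clip fails. The sketch below is a hypothetical Python illustration, with verify_speech() standing in for real speech recognition plus speaker verification; note that a real-time voice clone could still pass such a check, which is exactly what makes voice skins so dangerous.

    import secrets

    PHRASES = ["blue lantern", "seven rivers", "quiet maple", "iron kite"]

    def issue_challenge() -> str:
        # Pick an unpredictable phrase the user must speak back.
        return secrets.choice(PHRASES)

    def verify_speech(audio: bytes, expected: str) -> bool:
        # Stand-in for speech-to-text plus speaker verification.
        return audio.decode("utf-8", errors="ignore") == expected

    def authenticate(get_audio) -> bool:
        challenge = issue_challenge()
        print(f'Device: "Please say: {challenge}"')
        return verify_speech(get_audio(challenge), challenge)

    # A replay attacker can only play back a stale recording:
    replay_attacker = lambda _challenge: b"open the garage door"
    # A live user (or, worryingly, a real-time voice clone) can comply:
    live_user = lambda challenge: challenge.encode("utf-8")

    print(authenticate(replay_attacker))  # False: stale clip fails the challenge
    print(authenticate(live_user))        # True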

Social media companies face both an opportunity and a risk. They can become the heroes of this story and advocate for security and privacy controls, or they can do nothing. Some immediate actions these companies can take:

  • Create a group of advisors, listen to them and publish the takeaways publicly
  • Publish a vision and mission statement
  • Publish a roadmap, even if it’s just the first step
  • Create a set of privacy principles
  • Build an ecosystem of security and privacy advocates and experts

In an era of digital innovation, social media companies don't have the luxury of turning a blind eye or playing the victim. In the digital economy, data is the new currency, and these companies are sitting on a treasure trove.
