Making its online debut toward the end of 2017, the term “deepfake” is a blend of the more familiar words “deep learning” and “fake”. Perhaps you’ve seen it popping up in online tech forums or articles on upcoming AI trends, but what does it actually mean? According to Techopedia, deepfake is:
“a term for videos and presentations enhanced by artificial intelligence and other modern technology to present falsified results.”
Often, this synthetic media is created via image processing that manipulates appearance, vocal patterns and movement to simulate another individual, and it is commonly applied to celebrity or political speeches and presentations. It is powered by a deep learning technique known as generative adversarial networks (GANs), in which two neural networks, a generator and a discriminator, are trained against each other: the generator produces fakes, and the discriminator tries to tell them apart from real data.
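The adversarial setup behind a GAN can be made concrete in a few lines. The toy sketch below (plain Python, no deep learning library) uses hard-coded stand-ins for the generator and discriminator just to show the two competing loss functions; real deepfake systems train deep networks for both roles, and every function and constant here is an illustrative assumption.

```python
import math
import random

# Toy stand-ins: a "generator" maps noise to a fake sample, and a
# "discriminator" maps a sample to the probability that it is real.
# Real systems use deep networks for both; these one-liners only
# illustrate the adversarial objectives.

def generator(z):
    # pretend the generator has learned to shift noise toward the real data
    return 0.5 * z + 1.0

def discriminator(x):
    # logistic score: samples near the "real" mean (2.0) look real
    return 1.0 / (1.0 + math.exp(-(2.0 - abs(x - 2.0))))

random.seed(0)
real = [random.gauss(2.0, 0.3) for _ in range(100)]   # "real" data
noise = [random.gauss(0.0, 1.0) for _ in range(100)]  # latent noise
fake = [generator(z) for z in noise]                  # generated samples

# Discriminator loss: -[log D(x_real) + log(1 - D(x_fake))], averaged.
# The discriminator wants D(real) high and D(fake) low.
d_loss = -sum(math.log(discriminator(x)) for x in real) / len(real) \
         - sum(math.log(1.0 - discriminator(x)) for x in fake) / len(fake)

# Generator loss: -log D(x_fake). The generator "wins" when fakes fool D.
g_loss = -sum(math.log(discriminator(x)) for x in fake) / len(fake)

print(f"discriminator loss: {d_loss:.3f}")
print(f"generator loss: {g_loss:.3f}")
```

Training alternates between lowering `d_loss` (updating the discriminator) and lowering `g_loss` (updating the generator) until the fakes become hard to distinguish, which is exactly why mature deepfakes are so convincing.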
Now take a second to think about what this could mean for the years to come. Creepy, right?
When you think of conventional warfare, what does that look like to you? Perhaps blurred images of explosions, military force, atomic weaponry and cyberattacks come to mind.
For many years, digital experts have been warning the public about the rise of cyber warfare. Although deepfake is not a traditional form of hacking, it can be seen as a psychological hack: something with the ability to take hold of our brains and deceive our emotional intelligence. It has the capacity to interfere with our cognitive ability to recognize objects, scenes and people.
Deepfake content is growing at a rapid pace, taking the internet by storm over the past couple of years. For example, in a period of just nine months beginning in early 2019, the number of deepfake videos online jumped from 7,964 to 14,678 (source).
Although early deepfake videos were buggy and easily spotted as fraudulent, newer content is more realistic than ever before, and it is only going to get more convincing.
“In January 2019, deep fakes were buggy and flickery. Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”
- Hany Farid, a UC Berkeley professor and deepfake expert.
Emerging from the darkest depths of the web, this digital weapon puts various sectors at high risk. The scariest part? No one is immune; we are all vulnerable to its harmful implications.
Just recently, the Brookings Institution shed light on the various social and political dangers of deepfake and its potential to cause harm now, and more likely in the years to come, by:
“...distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
Simply put, deepfake wields two major weapons: large-scale disinformation that erodes trust in institutions, and targeted damage to the reputations of individuals.
More recently in Belgium, an activist group released a deepfake video of the country’s Prime Minister falsely linking the COVID-19 outbreak to climate change.
This video, among others, is just the beginning of the rise of fake news. As the risk becomes more of a reality, many countries are taking national security precautions one step further and investing in deepfake detection technology.
So can deepfakes be detected? The simple answer: yes.
In the most affected space, the news industry, stakeholders are preparing for battle by investing in advanced AI technology. Journalists are paying closer attention than ever to separating fact from fiction, pairing their investigative backgrounds with tools that spot irregularities in audio, syntax (via speech-to-text) and visual cues such as microexpressions.
“Deepfake quality isn’t great... yet. But the paradigm is fast-moving and no one knows what the future holds. And then think of quantum computing, also on the horizon. That’s actually way more dangerous. It is already breaking traditional cryptography algorithms (such as RSA-2048). You could break the strongest encrypted password with this amount of power. As experts in AI applied to media [here at Moments Lab], we are thinking about these future trends and threats, constantly.”
- Frederic Petitpont, Moments Lab (ex Newsbridge) Co-founder & CTO
So what is the secret detection weapon? How are journalists fighting this war on fake news, and what does that look like?
First, it is important to note that deepfakes are far more complex than single doctored images or simple face swaps: they are usually videos with many moving parts, such as sound, motion and facial expressions. Hence the only reliable way to detect a deepfake is to trust the experts and the latest technology, pairing traditional investigative journalism with AI-powered detection tools that take a multimodal approach (i.e. analyzing multiple facets at once, such as facial features, voice patterns and context).
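As a rough illustration of the multimodal idea, the sketch below fuses per-modality suspicion scores into a single verdict. The detector names, example scores and weights are all assumptions made for illustration, not the output of any real tool.

```python
def fuse_scores(scores, weights=None):
    """Weighted average of per-modality deepfake-suspicion scores in [0, 1]."""
    if weights is None:
        weights = {k: 1.0 for k in scores}  # default: equal weighting
    total = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total

# Hypothetical scores for one video clip, one per analysis modality.
clip_scores = {
    "face_analysis": 0.81,   # e.g. blending artifacts around the jawline
    "voice_pattern": 0.64,   # e.g. unnatural prosody in the audio track
    "context": 0.30,         # e.g. metadata and provenance checks
}

# Weight the visual evidence more heavily in this sketch.
weights = {"face_analysis": 2.0, "voice_pattern": 1.5, "context": 1.0}

score = fuse_scores(clip_scores, weights)
verdict = "likely deepfake" if score > 0.5 else "likely authentic"
print(f"fused score: {score:.2f} -> {verdict}")  # fused score: 0.64 -> likely deepfake
```

The point of fusing modalities is robustness: a forger who perfects the face swap may still leave traces in the voice track or the clip’s provenance, and combining the signals makes the detector harder to fool than any single check.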
At a high level, media entities are adopting or investing in technologies that use deep neural networks (DNNs) and multimodal AI to analyze suspect content. This is done via machine learning methods (i.e. regression and classification) that automatically extract discriminative features, which are then used to decide whether a clip is a deepfake.
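To make the classification step concrete, here is a minimal sketch of a feature-based deepfake classifier: a hand-rolled logistic regression trained on synthetic feature vectors. The feature names (blink rate, artifact energy) and their distributions are invented for illustration; production detectors learn far richer features with deep networks.

```python
import math
import random

random.seed(42)

# Toy training set: each sample is a pair of features a detector might
# extract from a video. The distributions here are invented assumptions.
def make_sample(is_fake):
    blink = random.gauss(0.2 if is_fake else 0.5, 0.1)     # fakes blink less
    artifact = random.gauss(0.7 if is_fake else 0.3, 0.1)  # fakes show artifacts
    return [blink, artifact], 1 if is_fake else 0

data = [make_sample(i % 2 == 0) for i in range(200)]

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Plain logistic regression trained with stochastic gradient descent.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y  # gradient of the log loss w.r.t. the logit
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

def predict(x):
    """Probability that a feature vector comes from a deepfake."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

acc = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

Because the two classes are well separated in this toy feature space, the classifier reaches high training accuracy; the hard part in practice is finding features that still separate real from fake as generation methods improve.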
As deepfake technology grows more sophisticated in the coming years, its potential for damage grows with it. That’s why it will take a series of advancements in newly developed AI algorithms to detect even the slightest abnormalities in fake content.
Moments Lab (ex Newsbridge) is a cloud media hub platform for live & archived content.
Powered by Multimodal Indexing AI and a data-driven indexing approach, Moments Lab provides unprecedented access to content by automatically detecting faces, objects, logos, written text, audio transcripts and semantic context.
Whether it be for managing and accessing live recordings, clipping highlights, future-friendly archiving, content retrieval, or content showcasing and monetization, the solution allows for smart and efficient media asset management.
Today our platform is used worldwide by TV channels, press agencies, sports rights holders, production houses, journalists, editors and archivists to boost their production workflows and media ROI.