OSINT.org

Intelligence Matters

Deconstructing Deepfakes: How do they work and what are the risks?

October 21, 2020 By admin

Last month, Microsoft introduced a new deepfake detection tool, and just weeks ago Intel launched another. As more companies follow suit and concerns about the misuse of this technology grow, today’s WatchBlog looks at how deepfakes work and the policy questions they raise.

What is a deepfake?
A deepfake is a video, photo, or audio recording that seems real but has been manipulated using artificial intelligence (AI). The underlying technology can replace faces, manipulate facial expressions, synthesize faces, and synthesize speech. These tools are used most often to depict people saying or doing something they never said or did.

How do deepfakes work?
Deepfake videos commonly swap faces or manipulate facial expressions. The image below illustrates how this is done. In face swapping, the face on the left is placed on another person’s body. In facial manipulation, the expressions of the face on the left are imitated by the face on the right.

[Figure: Face Swapping and Facial Manipulation]
Deepfakes rely on artificial neural networks, which are computer systems that recognize patterns in data. Developing a deepfake photo or video typically involves feeding hundreds or thousands of images into the artificial neural network, “training” it to identify and reconstruct patterns—usually faces.
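The architecture most often described for face swapping is an autoencoder with a shared encoder and one decoder per identity: both faces are compressed through the same encoder, and the swap comes from decoding one person’s encoding with the other person’s decoder. The sketch below is a toy, NumPy-only illustration of that training loop, not a real deepfake pipeline: random arrays stand in for aligned face crops, and all sizes, learning rates, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for aligned, flattened 8x8 face crops (zero-centered).
faces_a = rng.random((200, 64)) - 0.5   # frames of "person A"
faces_b = rng.random((200, 64)) - 0.5   # frames of "person B"

code = 16                                  # size of the shared latent space
enc = rng.normal(0.0, 0.1, (64, code))     # shared encoder weights
dec_a = rng.normal(0.0, 0.1, (code, 64))   # decoder specialised to person A
dec_b = rng.normal(0.0, 0.1, (code, 64))   # decoder specialised to person B

def train_step(x, enc, dec, lr=0.05):
    """One gradient step minimising mean squared reconstruction error."""
    z = x @ enc                                 # encode
    err = z @ dec - x                           # decode and compare to input
    g_dec = z.T @ err / len(x)                  # gradient w.r.t. decoder
    g_enc = x.T @ (err @ dec.T) / len(x)        # gradient w.r.t. shared encoder
    dec -= lr * g_dec                           # in-place updates
    enc -= lr * g_enc
    return float((err ** 2).mean())

first = train_step(faces_a, enc, dec_a)
for _ in range(3000):                           # "training" on both identities
    train_step(faces_a, enc, dec_a)
    train_step(faces_b, enc, dec_b)
last = train_step(faces_a, enc, dec_a)

# The swap: encode a frame of person A, decode with person B's decoder.
swapped = (faces_a[:1] @ enc) @ dec_b
```

Because the encoder is shared, it learns identity-agnostic structure (pose, expression, lighting) while each decoder learns to render one specific face, which is what makes the cross-decoding swap possible. Real systems use deep convolutional networks and far more data, but the training loop has this same shape.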

How can you spot a deepfake?
The figure below illustrates some of the ways you can distinguish a deepfake from the real thing. To learn more about identifying deepfakes, and about the underlying technology, check out GAO’s recent Science & Tech Spotlight on the topic.
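One commonly cited telltale sign is unnatural blinking, and a simple way to quantify it is the eye aspect ratio (EAR): the eye’s vertical opening divided by its width, computed per video frame from facial landmarks. The sketch below is a minimal illustration of that heuristic; the landmark ordering, threshold, and simulated trace are assumptions for demonstration, not a production detector.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) landmarks around one eye, ordered p1..p6 as in
    the common 68-point face model: p1/p4 are the corners, p2/p6 and
    p3/p5 are upper/lower lid pairs."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # Two vertical openings averaged, divided by the horizontal width.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks as runs of at least min_frames consecutive frames
    whose EAR falls below the threshold."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:       # handle a closure at the end of the clip
        blinks += 1
    return blinks

# Simulated per-frame EAR trace: open eyes (~0.3) with two brief closures.
trace = [0.31, 0.30, 0.12, 0.10, 0.29, 0.30, 0.08, 0.09, 0.11, 0.32]
print(count_blinks(trace))  # prints 2
```

In practice the landmarks would come from a face-landmark detector, and a detector would compare the measured blink rate against the range typical of real video; early deepfake generators, trained mostly on open-eyed photos, produced subjects that blinked rarely or oddly.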

What are the benefits of these tools?
Voices and likenesses developed using deepfake technology can be used in movies to achieve a creative effect or maintain a cohesive story when the entertainers themselves are not available. For example, in the latest Star Wars movies, this technology was used to replace characters who had died or to show characters as they appeared in their youth. Retailers have also used this technology to allow customers to try on clothing virtually.

What risks do they pose?
Despite such benign and legitimate applications in film and commerce, deepfakes are more commonly used for exploitation. Studies have shown that the bulk of deepfake content online is pornographic, and that deepfake pornography disproportionately victimizes women.

There is also concern about potential growth in the use of deepfakes for disinformation. Deepfakes could be used to influence elections or incite civil unrest, or as a weapon of psychological warfare. They could also lead to disregard of legitimate evidence of wrongdoing and, more generally, undermine public trust.

What can be done to protect people?
As discussed above, researchers and internet companies, such as Microsoft and Intel, have experimented with several methods to detect deepfakes. These methods typically use AI to analyze videos for digital artifacts or details that deepfakes fail to imitate realistically, such as blinking or facial tics. But even with these interventions by tech companies, there are a number of policy questions about deepfakes that still need to be answered. For example:

  • What can be done to educate the public about deepfakes to protect them and help them identify real from fake?
  • What rights do individuals have to privacy when it comes to the use of deepfake technology?
  • What First Amendment protections do creators of deepfake videos, photos, and more have?

Deepfakes are powerful tools that can be used for exploitation and disinformation. With advances making them more difficult to detect, these technologies require a deeper look.

Sources:
GAO, Science & Tech Spotlight: Deepfakes
Deepfake Technology, Market Analysis

