Deepfakes: What are they, and why are they dangerous?

South Carolina Bar Journal, September 2021, Vol. 33 Issue 2, Pg. 34

By Rachael L. Anna

By now, almost everyone has seen synthetic video reproductions of political figures (including Donald Trump, Barack Obama and Boris Johnson), business leaders, or celebrities. Deepfakes, a portmanteau of “deep learning” and “fake,” are fabricated images, video, audio, and text content that appear to be real. The technology can be used to make people appear to say or do things they never said or did, or to substitute one person for another in existing videos. Over the last few years, deepfake technology has become increasingly sophisticated and, at the same time, more widely accessible. In fact, there are any number of apps a person can download for free or at low cost to create deepfakes.

Many deepfakes today are created by leveraging artificial intelligence techniques such as machine learning or deep learning. One of the most significant recent advances has come from the use of generative adversarial networks (“GANs”), which consist of two competing neural network models called the “generator” and the “discriminator.” The generator takes samples of data – such as images or audio of a particular person – and tries to produce fakes that the discriminator cannot distinguish from the real samples. The feedback from the discriminator enables the generator’s output to look or sound more realistic, and the process repeats. After a certain point, not even a trained eye, or ear, can detect the fake.
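For readers curious about the mechanics, the generator-versus-discriminator loop described above can be sketched in a few lines of Python. The example below is a deliberately simplified, hypothetical illustration: a one-dimensional “generator” learns to produce numbers resembling real data centered at 4.0, while a logistic “discriminator” tries to tell real from fake. Real deepfake systems use deep neural networks trained on vast amounts of data; every parameter and value here is invented purely to show the adversarial feedback loop.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Squash to (0, 1); clamp the input to avoid overflow
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

def real_sample():
    # "Real" data: draws from a normal distribution centered at 4.0
    return random.gauss(4.0, 1.0)

# Generator G(z) = a*z + b, fed with random noise z
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), estimating P(x is real)
w, c = 0.1, 0.0

lr = 0.02
for step in range(5000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    x = real_sample()
    z = random.gauss(0.0, 1.0)
    g = a * z + b
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    # Gradients of -log D(x) - log(1 - D(g)) with respect to w and c
    w -= lr * (-(1 - d_real) * x + d_fake * g)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator update: push D(fake) toward 1 (i.e., fool the discriminator)
    z = random.gauss(0.0, 1.0)
    g = a * z + b
    d_fake = sigmoid(w * g + c)
    # Gradients of -log D(g) with respect to a and b, via the chain rule
    a -= lr * (-(1 - d_fake) * w * z)
    b -= lr * (-(1 - d_fake) * w)

print(f"generator output now centered near {b:.2f} (real data centered at 4.0)")
```

Each pass through the loop is one round of the feedback described above: the discriminator sharpens its ability to spot fakes, and the generator uses that feedback to drift its output toward the real data. Scaled up to millions of parameters and hours of video or audio, the same dynamic produces fakes that humans cannot reliably detect.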

Deepfakes’ potential for future harm

Not that long ago, the ability to disseminate video, photographs or audio to large numbers of people was limited. Times have clearly changed. And while media manipulation is not a new phenomenon, modern technology and social media platforms allow convincing deepfakes to be rapidly and easily disseminated to a global audience in a matter of minutes. There are countless legitimate uses for deepfakes today, such as for art, entertainment and education. But deepfakes are also frequently used for harassment, intimidation and extortion against individuals and businesses. Indeed, many malicious deepfakes have been used for non-consensual pornography, primarily targeting women.

Deepfakes will likely migrate far beyond the pornography context in the next few years, with great potential for harm. At the beginning of 2021, the Federal Bureau of Investigation (FBI) recognized deepfake technology as a new emerging danger and warned that malicious actors “almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”[1] In addition to the threats identified by the FBI, others worry deepfakes could undermine public trust through misinformation campaigns, influence political elections, jeopardize national security, interfere with the stock market, lead to corporate espionage, and so forth. A few of these risks are examined below.

1. Cybersecurity Threats

Scenario: Susan, Jack’s boss and the CEO of the company where he works, calls on Friday afternoon.

Susan says: “Jack, hi, this is Susan. I’m at the airport getting ready to take off for Bobby’s lacrosse tournament in New York this weekend. Listen, Globex just called and needs us to wire $107,000 for the next shipment. You’ll be receiving an email in just a few minutes. Can you please wire the funds?”

Jack responds: “Sure, yes, no problem. I can take care of that.” Susan: “Great, thanks. Hope you and Cary have a great weekend. See you next week.” Jack opens the email, clicks the link, and wires the funds without a second thought.

Only one problem: the caller was not Susan. The attacker found a presentation Susan gave to a professional organization on YouTube and used it to create synthetic audio impersonating the tone, inflection and idiosyncrasies of Susan’s voice in less than 10 minutes. A quick Google search identified an article about her family that mentioned her son Bobby. Bobby had a public Instagram account where he shared videos of his lacrosse skills, and on that Friday afternoon, he posted a picture of himself at the airport with the caption: “Headed to NY for Regionals with the fam.” From any number of publicly available materials, the attacker discovered that Globex is a supplier for Susan’s company. And what about Jack? His bio as the CFO is listed on the company website, where he mentions he enjoys hiking with his wife, Cary, and their two children. The attacker was able to acquire all this information in less than 30 minutes.

The use of synthetic content to carry out cyberattacks against organizations is referred to as Business Identity Compromise (BIC). Criminals are investing in deepfake technology, which has the potential to alter the cyber threat landscape. BIC will involve the use of content generation and manipulation methods to create synthetic corporate identities or a sophisticated replication of a current employee. The FBI warns that “[t]his emerging attack vector will likely have very significant financial and reputational impacts to victim businesses and organizations.”[2]

To create a deepfake, threat actors search publicly accessible websites for videos, speeches and social media posts to gather the information they need. They may conduct months-long surveillance of the victim before carrying out the attack. Deepfake ploys are essentially advanced forms of phishing, but considerably more difficult to detect.

Criminals have already used, or attempted to use, synthetic audio in the commission of illicit activities, including blackmail and social engineering, as evidenced by publicly available examples of their success. A cybersecurity company reported earlier this year that it had...
