Deepfakes are taking over TikTok – here’s how to recognize them

One of the world’s most popular social media platforms, TikTok, now plays host to a steady stream of deepfake videos.

Deepfakes are videos in which a subject’s face or body has been digitally altered to make them look like someone else – usually a famous person.

A remarkable example is the @deeptomcruise TikTok account, which has posted dozens of deepfake videos impersonating Tom Cruise and has attracted some 3.6 million followers.

Deepfakes received a lot of media attention last year, with videos of Hollywood actor Tom Cruise going viral.

In another example, Meta CEO Mark Zuckerberg appears to confess to conspiratorial data sharing. More recently, uncanny deepfakes featuring actors such as Robert Pattinson and Keanu Reeves have been circulating.

While deepfakes are often used creatively or for fun, they are increasingly being used in disinformation campaigns, for identity fraud, and to discredit public figures and celebrities.

And while the technology needed to create them is advanced, it is becoming increasingly accessible, leaving detection software and regulations behind.

One thing is certain: deepfakes are here to stay. So what can we do about it?

Different roles

The manipulation of text, images and video has long been a foundation of interactive media. Deepfakes are no exception; they are the product of a deep-seated desire to participate in culture, storytelling, art and remix.

The technology is widely used in digital art and satire. It offers more sophisticated (and cheaper) techniques for visual insertions than green screens and computer-generated imagery.

Deepfake technology can also enable authentic-looking resurrections of deceased actors and historical reconstructions. It may even play a role in helping people grieve their deceased loved ones.

Comedian Jordan Peele voices a deepfake of former US President Barack Obama.

But the technology is also open to abuse

At the same time, deepfake technology is thought to pose several social problems, such as:

  • They are used as “evidence” to support other fake news and disinformation.
  • They are used to discredit celebrities and others whose livelihood depends on sharing content while maintaining a reputation.
  • They make it difficult to provide verifiable footage for political communication, health messaging and election campaigns.
  • Human faces are harvested for use in deepfake pornography.

The last point is especially worrying. In 2019, deepfake detection company Deeptrace found that 96% of the 14,000 deepfakes it identified online were pornographic in nature. Free apps such as the now-defunct DeepNude 2.0 have been used to make clothed women appear naked in footage, often for revenge porn and blackmail.

In Australia, deepfake apps have even allowed offenders to circumvent “revenge porn” laws – a problem that is expected to become more serious in the near future.

In addition, deepfakes are used in identity fraud and scams, especially in the form of video messages from a trusted “colleague” or “family member” requesting a money transfer. One study found that identity fraud using digital manipulation cost US financial institutions US$20 billion in 2020.

A growing concern

The creators of deepfakes emphasize the amount of time and effort it takes to make these videos look realistic. Take Chris Ume, the visual effects and AI artist behind the @deeptomcruise TikTok account. When the account made headlines last year, Ume told The Verge: “You can’t do that with the push of a button”.

But there is good evidence that deepfakes are becoming easier to make. Researchers from the United Nations Global Pulse initiative have demonstrated how speeches can be realistically faked in just 13 minutes.

As more deepfake apps are developed, we can expect less skilled people to increasingly produce authentic-looking deepfakes. Just think how much photo editing has exploded in the last ten years.

Legislation, regulation and detection software struggle to keep up with advances in deepfake technology.

In 2019, Facebook came in for criticism for failing to remove a manipulated video of US politician Nancy Pelosi, because the video fell short of its definition of a deepfake.

In 2020, Twitter banned the sharing of synthetic media that may mislead, confuse or harm people (except where a label is attached). TikTok did the same. And YouTube banned deepfakes related to the 2020 US federal election.

But even where these policies are well intentioned, it’s unlikely that platform moderators will be able to respond to reports and remove deepfakes quickly enough.

In Australia, lawyers at the NSW office of Ashurst have said existing copyright and defamation laws would not protect Australians from deepfakes.

And while efforts to develop laws have begun abroad, they focus on political communication. For example, California has made it illegal to post or distribute digitally manipulated content of a candidate during an election, but this offers no protection for non-politicians or celebrities.

How to detect a deepfake

One of the best remedies against malicious deepfakes is for users to equip themselves with as many detection skills as possible.

Usually the first sign of a deepfake is that something will feel “off”. If so, take a closer look at the subject’s face and ask yourself:

  • Is the face too smooth or are there unusual cheekbone shadows?
  • Do the eyelid and mouth movements seem disjointed, forced, or otherwise unnatural? (A rough programmatic check for this is sketched after this list.)
  • Does the hair look fake? Current deepfake technology struggles to maintain the original appearance of hair (especially facial hair).
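
To make the eyelid-movement check a little more concrete, here is a minimal sketch that tracks the classic eye aspect ratio (EAR) across a clip using OpenCV and MediaPipe. The landmark indices, the 0.2 blink threshold and the file name suspect_clip.mp4 are illustrative assumptions, not part of any official detection tool; an unusually flat or erratic EAR series is only a hint, not proof, of manipulation.

```python
import math

import cv2              # pip install opencv-python
import mediapipe as mp  # pip install mediapipe

# Landmark indices for one eye in MediaPipe's face mesh (p1..p6 for the
# classic EAR formula). These indices are an assumption taken from commonly
# shared mappings -- check them against the mesh diagram before trusting them.
EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(pts):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply during a blink."""
    p1, p2, p3, p4, p5, p6 = pts
    return (math.dist(p2, p6) + math.dist(p3, p5)) / (2.0 * math.dist(p1, p4))

def ear_series(video_path):
    """Return the per-frame eye aspect ratio for one face in a video."""
    cap = cv2.VideoCapture(video_path)
    ears = []
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                ears.append(None)  # no face detected in this frame
                continue
            lm = result.multi_face_landmarks[0].landmark
            h, w = frame.shape[:2]
            ears.append(eye_aspect_ratio([(lm[i].x * w, lm[i].y * h) for i in EYE]))
    cap.release()
    return ears

if __name__ == "__main__":
    series = ear_series("suspect_clip.mp4")  # hypothetical file name
    # Count frames where the EAR drops below a rough blink threshold of 0.2.
    blinks = sum(
        1 for prev, cur in zip(series, series[1:])
        if prev is not None and cur is not None and prev >= 0.2 > cur
    )
    print(f"frames: {len(series)}, rough blink count: {blinks}")
```

Genuine footage normally shows regular blinks as brief dips in the series; an absence of blinking was one of the earliest tells researchers reported, although newer fakes often get this right.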

Context is also important:

  • Ask yourself what the figure is saying or doing. Are they disparaging vaccines or appearing in a porn clip? Anything that seems out of character or contrary to public knowledge is relevant here.
  • Search online for keywords about the video, or the person in it, as many suspicious deepfakes have already been debunked.
  • Try to assess the reliability of the source – does it seem real? If you’re on a social media platform, has the poster’s account been verified?

Much of the above is basic digital literacy and requires common sense. Where common sense fails, there are some more in-depth ways to spot deepfakes. You can:

  • Look for keywords used in the video to see if there’s a public transcript of what’s being said – outlets often cover quotes from leading politicians and celebrities within 72 hours.
  • Take a screenshot of the video being played and run a Google reverse image search on it. This can reveal whether an original version of the video exists, which you can then compare with the dubious one (a small frame-grabbing sketch follows this list).
  • Run suspicious videos that appear to feature a “colleague” or “family member” past that person directly, through another channel.
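
For the reverse-image-search step, a small helper like the one below can pull still frames out of a downloaded clip so you have several images to upload. OpenCV is the only dependency; the file name suspect_clip.mp4 and the five-second sampling interval are illustrative assumptions, and the search itself still happens manually in the browser (Google Images, TinEye and similar services).

```python
import cv2  # pip install opencv-python

def grab_frames(video_path, out_prefix="frame", every_n_seconds=5):
    """Save a still image every few seconds of a video.

    The saved PNG files can be uploaded to a reverse image search to look
    for an earlier, unmanipulated version of the footage.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back if the FPS is unreadable
    step = max(1, int(fps * every_n_seconds))
    saved, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            name = f"{out_prefix}_{index:06d}.png"
            cv2.imwrite(name, frame)
            saved.append(name)
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    # "suspect_clip.mp4" is a placeholder for whatever video you downloaded.
    print(grab_frames("suspect_clip.mp4"))
```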

Finally, if you do manage to spot a deepfake, don’t keep it to yourself. Always press the report button.

Article by Rob Cover, Professor of Digital Communication, RMIT University

This article was republished from The Conversation under a Creative Commons license. Read the original article.

