
What happens when AI speech generation goes rogue – the dangers of ID ‘voiceprints’

The rampant misuse of AI speech generation is causing fear and speculation across multiple industries as the rapidly evolving technology leads to increasing instances of impersonation, political deepfakes, and security disruption.

Five years after the now-infamous PSA clip showing a deepfake of former US President Barack Obama warning of the dangers of misinformation posed by burgeoning artificial intelligence technologies, AI has vastly improved at producing fraudulent images, voice and video content – and is widely accessible to anyone with a computer and a modest budget.

This year has seen widespread adoption of AI speech generation, a type of artificial intelligence used to create synthesized voices that sound like natural human speech.

“Voice synthesis isn’t new — think WaveNet, Deep Voice and, most recently, VALL-E — but what’s changed is access to the technology and ease of use,” said Nishan Mills, chief architect at the Centre for Data Analytics and Cognition at La Trobe University.

“We’re seeing more widespread adoption by lay users,” Mills said.

One of the biggest social media trends this month, especially on TikTok, is AI-generated clips of prominent politicians like US President Joe Biden and former president Donald Trump making unusual announcements about video games and pop culture.

The advent of public-facing AI tools has given way to numerous fake videos of public figures in dubious circumstances – whether an AI Biden signing an executive order proclaiming Minecraft’s brilliance, or Pope Francis wearing a fashionable Balenciaga jacket.

And while the “meme” culture around generative AI can be thanked for hours of laughable content, the technology has already been used for many nefarious applications.

Last month, images generated with the AI program Midjourney fooled countless Twitter users into believing Donald Trump had been arrested, and right-wing commentator Jack Posobiec shared a fairly convincing fake video of Biden declaring the return of the US military draft in preparation for war.

In a meeting with science and technology advisers, Biden said it remains to be seen whether artificial intelligence is dangerous, but he urged tech companies to act responsibly.

“Tech companies, in my opinion, have a responsibility to make sure their products are safe before going public,” Biden said.

The US president also said that social media has already illustrated the damage that powerful technologies can do without proper safeguards.

AI music is going viral

Experts have long anticipated the risks of misinformation that AI-generated content could bring to politics and the media, but perhaps less expected is the technology’s recent impact in other industries, such as music.

This week, a song featuring AI-generated mock vocals by musicians Drake and The Weeknd went viral on streaming services, setting off serious alarm bells in the music industry.

Entitled “Heart on My Sleeve”, the fake Drake song was initially shared on TikTok by an anonymous user named Ghostwriter977 before being uploaded to streaming services.

The song generated over 600,000 plays on Spotify and millions on TikTok before being removed by Universal Music Group (UMG) for copyright infringement.

While it remains unclear whether the track’s instrumental was produced by AI, “Heart on My Sleeve” features fully AI-synthesized vocals mimicking Spotify’s most-streamed artist Drake and pop singer The Weeknd, complete with lyrics, rhymes and flows delivered in time.

UMG told Billboard magazine the viral AI posts “show why platforms have a fundamental legal and ethical responsibility to prevent their services from being used in ways that harm artists.”

“The training of generative AI using our artists’ music (which represents both a violation of our agreements and a violation of copyright law) and the availability of infringing content created with generative AI on DSPs raises the question of which side of history all stakeholders in the music ecosystem want to be on,” said a UMG spokesperson.

“On the side of artists, fans and human creative expression, or on the side of deepfakes, fraud and denying artists their due compensation,” they added.

Popular music critic Shawn Cee warned listeners that AI-generated music could evolve faster than regulations can keep up with.

“We are in the machine learning stage where it learns faster than it is regulated,” Cee said.

“It can 100% go up on Spotify… probably be there for a day or two, and it will drive the internet crazy.

“I find it incredibly weird and creepy that your image or likeness is being used in situations or scenarios that you never agreed to,” he said.

AI voices used to bypass Centrelink systems

In March, Guardian Australia journalist Nick Evershed said he accessed his own Centrelink self-service account using an AI-generated version of his voice, effectively exposing a serious security flaw in the voice identification system.

Amid growing concerns about the threat AI poses to voice authentication systems, Evershed’s research suggested that a clone of his own voice, combined with his customer reference number, was enough to gain access to his Centrelink self-service account.

Both Centrelink and the Australian tax office (ATO) facilitate the use of “voiceprints” as an authentication measure for callers trying to access their sensitive account information over the phone.

While the ATO suggests its voice authentication systems are advanced enough to analyze “up to 120 characteristics in your voice,” security experts have received growing reports of AI-cloned voices bypassing voice authentication systems at banks and elsewhere.

“Voice cloning, a relatively new technology that uses machine learning, is offered for free or for a small fee by a number of apps and websites, and a voice model can be created with just a handful of recordings of a person,” said Frith Tweedie, principal consultant at privacy solutions consultancy Simply Privacy.

“These systems should be thoroughly tested prior to deployment and regularly monitored to detect problems.

“But it’s hard to keep up with innovative fraudsters with easy access to these kinds of voice cloning tools. That raises the question of whether they should be released at all,” she added.

Australia currently has no specific law regulating artificial intelligence.


Shreya has been with australiabusinessblog.com for 3 years, writing copy for client websites, blog posts, EDMs and other mediums to engage readers and encourage action. By collaborating with clients, our SEO manager and the wider australiabusinessblog.com team, Shreya seeks to understand an audience before creating memorable, persuasive copy.
