
Generative AI is about to become a $23 trillion industry — not to mention the dark side of scams, deepfakes, and romance bots

The generative AI industry will be worth approximately A$22 trillion by 2030, according to the CSIRO. These systems – of which ChatGPT is currently the best known – can write essays and code, generate music and illustrations, and hold entire conversations. But what happens when they’re turned to illegal uses?

Last week, the streaming community was rocked by a scandal that can be traced back to the misuse of generative AI. Popular Twitch streamer Atrioc released a tearful apology video after being caught viewing pornography with female streamers’ faces superimposed onto it.

The ‘deepfake’ technology needed to graft a celebrity’s face onto the body of a porn actor has been around for a while, but recent advances have made such fakes much harder to detect.

And that’s the tip of the iceberg. In the wrong hands, generative AI can do untold damage. There’s a lot we stand to lose if laws and regulation fail to keep pace.


The same tools used to create deepfake porn videos can be used to fake a US president’s speech. Credit: BuzzFeed.

From controversy to outright crime

Last month, the generative AI app Lensa came under fire for allowing its system to create fully nude and hypersexualized images from users’ portrait photos. Controversially, it also whitened the skin of women of color and made their features more European.

The backlash was swift. But what has gone relatively unnoticed is the huge potential for artistic generative AI to be used in scams. At the more troubling end of the spectrum, there are reports these tools can fake fingerprints and facial scans (the methods most of us use to lock our phones).

Criminals are quickly finding new ways to use generative AI to enhance the fraud they already commit. The appeal of generative AI for scams comes from its ability to find patterns in large amounts of data.

The cybersecurity sector has seen a rise in “bad bots”: malicious automated programs that mimic human behavior to commit crime. Generative AI will make these even more sophisticated and harder to detect.

Ever received a scam text from the “tax office” claiming you have a refund waiting? Or maybe you got a call claiming a warrant was out for your arrest?

Scams like these could use generative AI to improve the quality of the texts or emails, making them much more believable. In recent years, we have already seen AI systems used to impersonate important figures in “voice spoofing” attacks.

Then there are romance scams, where criminals pose as romantic interests and ask their targets for money to help them out of financial distress. These scams are already widespread and often lucrative. Training AI on actual messages between intimate partners could produce a scam chatbot that is indistinguishable from a human.

Generative AI could also let cybercriminals target vulnerable people more selectively. For example, training a system on information stolen from major companies – as in the Optus or Medibank hacks last year – could help criminals target elderly people, people with disabilities, or people in financial hardship.

Furthermore, these systems can be used to improve computer code, which some cybersecurity experts say will make malware and viruses easier to create and harder for antivirus software to detect.

The technology is here, and we are not prepared

Australia’s and New Zealand’s governments have published frameworks relating to AI, but these are not binding rules. Both countries’ laws on privacy, transparency, and freedom from discrimination are not up to the task when it comes to AI’s impact. This puts us behind the rest of the world.

The US has had its National Artificial Intelligence Initiative Act in force since 2021. And since 2019, it has been illegal in California for a bot to interact with users for commercial or electoral purposes without disclosing that it is not human.

The European Union is also well on its way to implementing the world’s first dedicated AI law. The AI Act bans certain types of AI programs that pose an “unacceptable risk” – such as those used in China’s social credit system – and imposes mandatory restrictions on “high-risk” systems.

Although asking ChatGPT how to break the law produces warnings that “planning or executing a serious crime can lead to serious legal consequences”, the fact is these systems are not required to have a “moral code” programmed into them.

There may be no limit to what they can be asked to do, and criminals will likely find workarounds for any rules designed to prevent illegal use. Governments need to work closely with the cybersecurity industry to regulate generative AI without stifling innovation – for example, by requiring ethical considerations for AI programs.

The Australian government should use its forthcoming review of the Privacy Act to get ahead of the potential threats generative AI poses to our online identities. Meanwhile, New Zealand’s Privacy, Human Rights and Ethics Framework is a positive step.

As a society, we also need to become warier about believing what we see online, and to remember that people are traditionally bad at detecting fraud.

Can you recognize scams?

As criminals add generative AI tools to their arsenal, detecting scams will only get more difficult. The classic tips will still apply. But beyond those, we can learn a lot from reviewing the ways these tools fall short.

Generative AI is bad at critical reasoning and at conveying emotion. It can even be tricked into giving wrong answers. Knowing when and why this happens can help us develop effective methods for catching cybercriminals who use AI for extortion.

Tools are also being developed to detect AI output from systems like ChatGPT. If they prove effective, these could go a long way toward preventing AI-based cybercrime.
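As a rough illustration of how such detectors often work, here is a minimal sketch that scores a passage’s “perplexity” under a public language model – the assumption (a common research heuristic, not the method of any specific product) being that machine-generated text tends to be more statistically predictable than human writing. The model choice (GPT-2 via the Hugging Face transformers library) and the cutoff value are illustrative only.

    # A minimal sketch of perplexity-based AI-text detection. This is a common
    # research heuristic, not how any particular commercial detector works.
    # Requires: pip install torch transformers
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Lower perplexity = the model finds the text more predictable.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            # Passing labels=ids makes the model return its mean cross-entropy loss.
            loss = model(input_ids=ids, labels=ids).loss
        return torch.exp(loss).item()

    THRESHOLD = 50.0  # illustrative assumption; any fixed cutoff will misfire sometimes

    passage = "Generative AI systems can write essays, code and music."
    score = perplexity(passage)
    verdict = "possibly AI-generated" if score < THRESHOLD else "likely human-written"
    print(f"perplexity = {score:.1f} -> {verdict}")

In practice such scores are noisy: short passages, lightly edited AI text and unusual human writing can all defeat a simple cutoff, which is why the caveat about these tools proving effective matters.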

This article has been republished from The Conversation under a Creative Commons license. Read the original article.



Shreya has been with australiabusinessblog.com for three years, writing copy for client websites, blog posts, EDMs and other media to engage readers and encourage action. By collaborating with clients, our SEO manager and the wider australiabusinessblog.com team, Shreya seeks to understand an audience before creating memorable, persuasive copy.
