
Advances in AI Make Voice Cloning Easy, Raising Scam Risks and Need for Protective Measures

Advances in artificial intelligence have made it remarkably easy to clone human voices. Since around 2018, machine learning systems have been able to replicate a person's voice with steadily increasing precision and speed, and recent models can produce highly accurate clones from even brief recordings.

One notable example comes from OpenAI, the organization behind ChatGPT. This year, OpenAI previewed Voice Engine, a model that can replicate a voice from only a 15-second audio clip. Although the tool is not publicly available and is designed with safeguards to prevent misuse, its existence highlights how sophisticated voice cloning technology has become.

In contrast, ElevenLabs offers a far more accessible option: for a fee of just $6, users can clone a voice from a one-minute audio sample. The results are not perfect, but they are convincing enough to fool many listeners, underscoring how easily the technology can be misused in everyday situations.

The risks of voice cloning are particularly evident in schemes such as the grandparent scam. Fraudsters use a cloned voice to impersonate a family member in distress, claiming, for example, to have been in an accident or to be in legal trouble, and they often press the victim to keep the call secret so the ruse is not discovered.

To guard against such scams, it’s crucial to establish preventive measures. One effective strategy is to set up a family code word to use in emergencies. If a call purporting to be from a family member requests money or urgent assistance, confirming the code word can help distinguish between genuine requests and fraudulent attempts.
