Vishing, or voice phishing, combines traditional deception with modern technology, using mobile and landline phones to manipulate individuals into revealing confidential information. This method targets organisations, attempting to extract sensitive company data, financial details, or employee information under the guise of legitimacy that a human voice provides.

Generative AI and voice and video cloning

Generative AI, which powers deepfakes and synthetic media, is advancing swiftly. Its ability to create realistic audio, video, and even entirely synthetic people from scratch improves every day. While its potential for innovation and accessibility is immense, it also poses a significant threat in the form of synthetic impersonation and misinformation. In the entertainment industry, for example, there is a growing conversation about using AI to create virtual actors or voice-over artists, a concern highlighted during discussions like the SAG (Screen Actors Guild) strike.

The real-world impact and scale

The rapid development of generative AI raises questions about its real-world scale. On social media, we are already seeing these technologies used to create hyper-realistic profiles and content, although the extent of their use in large-scale criminal activity such as vishing remains a subject of ongoing investigation and discussion. Pre-recorded vishing calls have been around for some time, but they are generally easy to distinguish from a real person or message. The real risk arrives when this technology can clone a familiar or sympathetic voice in real time; few people will be prepared for that.

Why leaders should be alert

The potential risk extends beyond financial losses to breaches of trust and corporate integrity. For instance, an AI-generated impersonation of a senior executive could mislead employees into disclosing sensitive information to cyber criminals or competitors. A recent case in which an entire C-suite was 'deepfaked' on a video call, duping an employee into releasing millions of dollars, shows how immediate the risk is.

Mechanics of Vishing and AI misuse

Trust exploitation through impersonation: Attackers often imitate authority figures to manipulate targets, now enhanced by AI's ability to mimic voices or visuals convincingly.

Leveraging advanced technology: GenAI can generate voice messages or live calls at scale and in near real time, meaning the deception is likely to slip past most victims undetected.

Social engineering tactics: These attacks exploit human psychology, using urgency or authority to prompt action.
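As a rough illustration of how these pressure tactics might be surfaced automatically, here is a minimal Python sketch that scores a call transcript for urgency, authority and secrecy cues. The cue lists and threshold are hypothetical assumptions for demonstration only, not a production detection method.

```python
# Illustrative sketch only: a naive heuristic that flags call transcripts
# containing common social-engineering pressure cues. The cue phrases and
# threshold below are assumptions chosen for demonstration.

URGENCY_CUES = ["right now", "immediately", "before end of day", "urgent"]
AUTHORITY_CUES = ["this is the ceo", "on behalf of the director", "compliance requires"]
SECRECY_CUES = ["don't tell", "keep this between us", "confidential transfer"]

def pressure_score(transcript: str) -> int:
    """Count how many known pressure cues appear in a transcript."""
    text = transcript.lower()
    cues = URGENCY_CUES + AUTHORITY_CUES + SECRECY_CUES
    return sum(1 for cue in cues if cue in text)

def is_suspicious(transcript: str, threshold: int = 2) -> bool:
    """Flag a call for manual review if it trips enough cues."""
    return pressure_score(transcript) >= threshold

call = ("This is the CEO. I need you to process this confidential "
        "transfer right now, and don't tell anyone until it clears.")
print(is_suspicious(call))  # True: urgency, authority and secrecy cues all present
```

A real system would work on imperfect speech-to-text output and far richer signals, but even this toy version shows the principle: the psychology of the attack leaves detectable traces in the language used.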

How can you protect your organisation?

Fostering awareness and education: Regular security awareness training to help employees recognise and appropriately respond to vishing attempts and any possible AI-generated communications.

Robust verification processes: Implementing stringent protocols and access control for verifying the authenticity of unusual requests, particularly those involving finances or confidential data.

Advanced security infrastructure: Beat it at its own game by employing AI and machine learning for detecting anomalies and monitoring communication channels.

Policy implementation: Developing clear guidelines for handling sensitive requests and reporting suspicious activities.
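The verification-process measure above can be sketched as a simple callback policy: high-risk requests are never actioned on the strength of the inbound call alone, but re-verified on a number from a trusted directory. The directory entries, threshold and request fields below are hypothetical assumptions for illustration, not a real control implementation.

```python
# Illustrative sketch only: a minimal out-of-band callback policy.
# Directory numbers, the threshold and the request fields are assumptions
# for demonstration; real controls would integrate with HR/finance systems.

KNOWN_DIRECTORY = {              # numbers verified out of band (e.g. HR records)
    "cfo": "+61-2-5550-0100",
    "it-helpdesk": "+61-2-5550-0199",
}

HIGH_RISK_THRESHOLD = 10_000     # payments at or above this always need a callback

def requires_callback(request: dict) -> bool:
    """Decide whether a request must be re-verified on a known number."""
    return (request.get("amount", 0) >= HIGH_RISK_THRESHOLD
            or request.get("involves_credentials", False))

def verify(request: dict) -> str:
    """Return the action an employee should take for an inbound request."""
    if not requires_callback(request):
        return "proceed with standard checks"
    number = KNOWN_DIRECTORY.get(request.get("claimed_role", ""))
    if number is None:
        return "refuse: claimed role not in verified directory"
    # Crucially, never call back the number the request came from.
    return f"hang up and call back on {number} before acting"

req = {"claimed_role": "cfo", "amount": 250_000, "source_number": "+61-400-000-000"}
print(verify(req))  # hang up and call back on +61-2-5550-0100 before acting
```

The key design choice is that the inbound channel, which a cloned voice controls, is never trusted to verify itself; confirmation always happens over a channel the organisation controls.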

For more information on courses about GenAI, vishing and phishing, and our NEW Cyber Edu versions for teens, contact the phriendly team for a personalised demo.