Vishing, or voice phishing, combines traditional deceptive practices with modern technology, using mobile and landline phones to manipulate individuals into revealing confidential information. Attackers target organisations this way to obtain sensitive company data, financial details, or employee information, hiding behind the apparent legitimacy of a human voice.
Generative AI and voice and video cloning
Generative AI, which powers deepfakes and synthetic media, is advancing swiftly. Its ability to create realistic audio, video, and even entirely synthetic people from scratch is improving every day. While its potential for innovation and accessibility is immense, it also poses a significant threat in the form of synthetic impersonation and misinformation. In the entertainment industry, for example, there is a growing conversation about using AI to create virtual actors or voice-over artists, a concern highlighted during discussions around the SAG-AFTRA (Screen Actors Guild) strike.

The real-world impact and scale
The rapid development of Generative AI raises questions about how widely these technologies are being deployed in the real world. On social media, we are already witnessing their use to create hyper-realistic profiles and content. The extent to which they are being used for criminal activities like vishing at scale, however, remains a subject of ongoing investigation and discussion. Pre-recorded vishing calls have been around for some time, but they are generally easy to distinguish from a real person or message. The real danger arrives when this technology can clone a sympathetic or known voice in real time; people simply won't be prepared for that.
Why leaders should be alert
The potential risk extends beyond financial losses to breaches of trust and corporate integrity. For instance, an AI-generated impersonation of a senior executive could mislead employees into disclosing sensitive information to cyber criminals or competitors. In one recent case, an entire C-suite was 'deepfaked' on a video call, duping an employee into releasing millions of dollars; the risk is immediate, not hypothetical.
Mechanics of vishing and AI misuse
Trust exploitation through impersonation: Attackers often imitate authority figures to manipulate targets, now enhanced by AI's ability to mimic voices or visuals convincingly.
Leveraging advanced technology: GenAI enables convincing voice messages and calls at scale and in near real time, which means the deception will slip past most victims undetected.
Social engineering tactics: These attacks exploit human psychology, using urgency or authority to prompt action.
How can you protect your organisation?
Establish a “safe word” system
Set a unique verbal code or “safe word” for handling sensitive requests over the phone. Especially for transactions, password resets, or account changes, agreeing on a phrase known only to internal staff (or friends and family) provides a quick and private way to verify authenticity. Since remembering a set word can be difficult for some people, confirming a detail from a previous conversation works just as well.
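As a minimal illustration of the idea (not a production implementation), the agreed phrase can be kept as a salted hash so it never sits anywhere in plain text. The function names and parameters in this Python sketch are hypothetical:

```python
import hashlib
import hmac
import os

def enroll_safe_word(safe_word: str) -> tuple[bytes, bytes]:
    """Keep a random salt and a salted hash, never the word itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", safe_word.strip().lower().encode(), salt, 100_000
    )
    return salt, digest

def verify_safe_word(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison so near-miss guesses leak nothing."""
    candidate_digest = hashlib.pbkdf2_hmac(
        "sha256", candidate.strip().lower().encode(), salt, 100_000
    )
    return hmac.compare_digest(candidate_digest, digest)

# Enrol once; verify whenever a sensitive request comes in by phone.
salt, digest = enroll_safe_word("blue heron")
print(verify_safe_word("Blue Heron", salt, digest))  # True (case-insensitive)
print(verify_safe_word("grey heron", salt, digest))  # False
```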
Raise awareness through training
Offer regular, role-relevant security awareness training to teach staff how to spot vishing and AI-assisted scams, particularly ones that sound too familiar or too good to be true.
Create a robust verification culture
Encourage a "trust but verify" approach. Staff should feel empowered (and required) to double-check unusual or sensitive requests via an alternate, known channel.
Use technology against itself
Leverage AI-based security tools that can detect anomalies in communication patterns or flag calls coming from spoofed numbers or unknown sources.
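Commercial tools apply machine learning to audio, metadata, and behavioural signals; purely as a toy illustration, the rule-based scorer below shows the kind of red flags such systems weigh. Every field, keyword, and threshold here is an assumption for the sketch, not a real detection model.

```python
# Toy rule-based risk scorer: illustrative only, not a real detection model.
URGENCY_WORDS = {"urgent", "immediately", "right now", "confidential", "wire"}

def vishing_risk_score(call: dict) -> int:
    score = 0
    if call.get("caller_id_spoof_suspected"):
        score += 3  # e.g. the carrier could not attest the caller ID
    if not call.get("number_in_directory", True):
        score += 2  # first contact from a number not on file
    transcript = call.get("transcript", "").lower()
    score += sum(1 for word in URGENCY_WORDS if word in transcript)
    return score

call = {
    "caller_id_spoof_suspected": True,
    "number_in_directory": False,
    "transcript": "This is urgent, we need the wire sent immediately.",
}
print(vishing_risk_score(call))  # 8 -> flag for human review
```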
Formalise reporting protocols
Ensure there are clear, fast internal pathways for reporting suspicious calls or requests. Quick internal response can prevent major incidents.
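One simple way to keep that pathway fast is a single structured intake, so every report lands in one place with the same fields. The sketch below (file name and fields are illustrative) appends reports as JSON lines; in practice this would feed a ticketing or SIEM system.

```python
import json
from datetime import datetime, timezone

def report_suspicious_call(reporter: str, caller_number: str, summary: str,
                           log_path: str = "vishing_reports.jsonl") -> dict:
    """Append a structured report so triage always sees consistent fields."""
    record = {
        "reported_at": datetime.now(timezone.utc).isoformat(),
        "reporter": reporter,
        "caller_number": caller_number,
        "summary": summary,
    }
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

report_suspicious_call("a.smith", "+61 2 5550 9999",
                       "Caller claimed to be the CFO and demanded an urgent transfer.")
```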
For more information on courses about GenAI, vishing and phishing, and our NEW Cyber Edu versions for teens, contact the phriendly team for a personalised demo.