2023 brought a lot of challenges in the cyber space: the trend of large data breaches continued, and security was, and still is, front of mind for many. What does the future hold? We can't know for sure, but let's take a look at some of the topics occupying CISOs, boards and security experts.
Challenges with phishing at scale using GenAI
The adoption of Generative AI brings new security and privacy challenges. Organisations should consider enterprise-grade solutions and ensure strict policies and settings to prevent unwanted data sharing. This is particularly important when using free or consumer-grade versions of these technologies.
Social engineering attacks, powered by Generative AI, are becoming more sophisticated. Tools like ChatGPT enable attackers to craft more personalised and convincing lures. Organisations should focus on widespread awareness and education, along with implementing AI and zero-trust frameworks to combat these threats.
It’s been over a year since ChatGPT was released to the public, and it and the AI tools that have followed have come a long way since then, improving at an alarming rate. Language barriers have all but disappeared, the “one attacker to one victim” model of traditional phishing and vishing scams can now be scaled up, and deepfakes are becoming too convincing to ignore as an impersonation threat.
AI-enhanced impersonation and identity fraud
New, easy-to-use technologies for creating deepfakes and audio clones pose a significant threat of identity fraud and impersonation. In the hands of a scammer, this technology makes corporate impersonation fraud far easier to achieve. A high-profile individual or member of the leadership team can be mimicked to convince staff to leak financial information, disclose sensitive data or even carry out direct theft. Advanced detection of these methods is on its way, but in the meantime, staff education and training are imperative.
Social engineering attacks
GenAI's impact on social engineering extends beyond email-based phishing attacks. AI-driven chatbots present new vulnerabilities, particularly in interactive platforms such as customer service environments, social media and SMS (smishing). Impersonation over chat inside the organisation’s environment can lead to theft in the same way a deepfake video or audio clip can. Chat systems such as Teams, Slack, Messenger and Skype can become vehicles for social engineering and phishing, with the attacker using GenAI to mimic the language and style of the person being impersonated.
Cyber trends and predictions webinar
Join us for an in-depth discussion about how GenAI will affect phishing and the human factor in cyber security in a live webinar on Feb 20th 2024, presented by Karina Mansfield and Damian Grace.