Gone are the days of dropping off a resume at your local branch or office – recruitment has moved almost exclusively online, making it easier than ever for businesses to connect with talent from around the world. However, this convenience comes with a set of challenges, one of which is the risk of deepfakes and covert job outsourcing.
Deepfakes, powered by artificial intelligence (AI), are synthetic media in which a person's likeness is copied or created from scratch. In the context of HR or recruitment, imagine interviewing a candidate over a video call, only to find out later that the person you were speaking to was a deepfake, and that the ‘person’ doing the job was outsourcing almost all of it to AI tools. This might sound like science fiction, but with the easy accessibility of AI, it's a real possibility that recruitment and management professionals need to be aware of.
To computer programmers and those familiar with automation, this idea isn't new. For years, people have automated parts of their jobs for efficiency, or outsourced tasks they didn't know how to do or have time for. What is the ethical stance on this practice at your organisation?
It’s highly likely that AI is already being used at your organisation. Even everyday platforms rely on it: Google Search, for example, has used AI in its algorithms for many years. With AI tools now so accessible to individual users, employees need to be aware of the associated risks, such as privacy concerns and the misuse of AI technologies. Companies should provide training and guidelines to help employees use AI tools responsibly and safely.
A simple guideline might read: using AI tools such as GPT-4 for first drafts, inspiration and thought starters is generally fine (check with your organisation), but their outputs should not be relied upon as fact, because these models can hallucinate. Find your own citations before writing up your final draft.
Business Email Compromise (BEC) and Vendor Email Compromise (VEC) are two significant threats that organisations need to be aware of. BEC is a type of scam in which an attacker impersonates a senior employee within the organisation and attempts to trick an employee or customer into transferring funds or disclosing sensitive information. VEC is similar, but the attacker impersonates a vendor or supplier.
AI can make these scams more convincing. For example, AI can generate deepfake audio or video that makes the impersonation far more believable. This is a significant risk, especially for organisations whose employees work remotely and rely heavily on digital communication.
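One common technical tell in BEC and VEC attempts is a sender domain that closely resembles, but does not exactly match, a trusted one (e.g. a zero substituted for the letter "o"). The sketch below shows one simple way such lookalike domains could be flagged, using only Python's standard library. The `KNOWN_DOMAINS` list and the similarity threshold are illustrative assumptions, not part of any real product, and a check like this would only supplement, never replace, proper email authentication controls such as SPF, DKIM and DMARC.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of trusted vendor domains, for illustration only.
KNOWN_DOMAINS = {"acme-supplies.com", "contoso.com"}

def flag_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """Return True if the sender domain closely resembles, but does not
    exactly match, a known trusted domain: a common sign of VEC."""
    if sender_domain in KNOWN_DOMAINS:
        return False  # exact match: trusted, nothing to flag
    # Compare against each trusted domain; a high similarity ratio on a
    # non-identical string suggests a deliberate lookalike.
    return any(
        SequenceMatcher(None, sender_domain, known).ratio() >= threshold
        for known in KNOWN_DOMAINS
    )

print(flag_lookalike("contoso.com"))   # exact match, not flagged
print(flag_lookalike("c0ntoso.com"))   # lookalike, flagged
```

A threshold around 0.85 is a starting point, not a recommendation: set it too low and ordinary unrelated senders get flagged, too high and subtle one-character swaps slip through.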
The risks associated with deepfakes and AI are real, but there are concrete steps organisations can take to protect themselves.
While deepfakes and AI present new challenges for HR and management, these can be mitigated with the right knowledge and tools. By staying informed and proactive, professionals can navigate these challenges and continue to foster a safe and productive work environment.