According to the FBI, scammers are deploying deepfakes and stealing personally identifying information during online job interviews for remote positions.
Deepfakes, synthetic audio, image, and video content created with artificial intelligence or machine-learning technology, have been on the radar as a potential phishing concern for some years.
The FBI's Internet Crime Complaint Center (IC3) reports an increase in complaints alleging the use of deepfakes and stolen personally identifying information to apply for remote work roles, the majority of them in the technology industry.
Information technology is one industry that has pushed hard to maintain remote employment, even as some offices ask employees to return.
The majority of the vacancies reported to IC3 have been in information technology, programming, database administration, and software development.
The FBI warns that “some of the alleged positions entail access to customer PII, financial data, business IT databases and/or proprietary information,” which highlights the risk an organization faces if it hires a fraudulent applicant.
The FBI says the complaints submitted to IC3 center on the use of voice deepfakes during online interviews with prospective candidates, though victims have also observed visual inconsistencies.
“In these interviews, the actions and lip movement of the person being interviewed on camera do not perfectly match with the voice of the person speaking,” according to the FBI. At times, it adds, actions such as coughing, sneezing, or other auditory actions do not align with what is shown visually.
The use of stolen personally identifiable information (PII) to apply for these remote roles has also been highlighted in complaints to IC3.
The FBI states that “victims have complained that their identities have been used” and that “pre-employment background checks showed PII submitted by some of the applicants belonged to another individual.”
In March 2021, the FBI warned that bad actors would likely use deepfakes for cyber and foreign influence operations within the following 12 to 18 months.
The agency hypothesized that synthetic content might be used as an extension of spearphishing and social engineering. It was also concerned that the fraudsters behind business email compromise (BEC), today's most expensive form of fraud, would transition into business identity compromise, in which fraudsters create synthetic corporate personas or sophisticated emulations of existing employees.
In addition, the FBI said that visual signs such as distortions and irregularities in photos and video might point to the presence of synthetic content. Typical visual inconsistencies in synthetic video include unnatural head and torso motion, as well as synchronization problems between facial and lip movements and the associated audio.
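As a rough illustration of the synchronization signal described above, the toy sketch below correlates a per-frame measure of mouth openness with per-frame audio energy and flags poor correlation. This is a hypothetical example, not an FBI tool or a production detector; in a real pipeline the two series would come from a face-landmark detector and an audio envelope extractor, and the `0.3` threshold is an arbitrary assumption.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)


def likely_out_of_sync(mouth_openness, audio_energy, threshold=0.3):
    """Flag a clip when lip motion and audio loudness track poorly.

    threshold is an illustrative cutoff, not a calibrated value.
    """
    return pearson(mouth_openness, audio_energy) < threshold


# Well-synced speech: the mouth opens when the audio gets loud.
synced_mouth = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
synced_audio = [0.2, 0.9, 0.8, 0.1, 0.8, 0.2]

# Mismatched clip: audio peaks while the mouth is closed.
off_mouth = [0.1, 0.8, 0.9, 0.2, 0.7, 0.1]
off_audio = [0.9, 0.1, 0.2, 0.8, 0.1, 0.9]

print(likely_out_of_sync(synced_mouth, synced_audio))  # False
print(likely_out_of_sync(off_mouth, off_audio))        # True
```

Real detectors use far richer features (e.g. learned audio-visual embeddings), but the underlying idea is the same: measure how well the visible articulation tracks the sound.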
Fraudulent attacks on recruitment processes are not a new concern; the use of deepfakes for the task, however, is novel. In May, the US Department of State, the US Department of the Treasury, and the Federal Bureau of Investigation (FBI) warned US firms against mistakenly hiring North Korean IT workers.
The agencies cautioned that these contractors were not typically engaged directly in hacking but were using their access as subcontracted developers within US and European corporations to enable the nation's hacking activities.