
FBI: Deepfake Fraudsters Applying for Remote Employment

Paycheck Is a Path to Insider Access at Tech Companies

That candidate for a remote software coding position may not actually exist, at least not as presented, the FBI says in a warning for tech companies to be on the lookout for deepfake applicants.


Threat actors are combining stolen personally identifiable information with advanced imaging technology to pose as job candidates and deceive tech companies in the hopes of securing remote employment, states an advisory from the FBI's Internet Crime Complaint Center.

The goal is to gain insider access to customer and financial data, corporate databases and proprietary information.

Deepfakes use artificial intelligence to superimpose one person's likeness or voice onto another's, even in real time.

The technique isn't foolproof: The FBI says prospective employers have caught on to the fraud when the actions and lip movements of the interviewee didn't quite sync up with the audio.
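That sync mismatch can be framed as a simple signal-alignment check. The following is a minimal, hypothetical Python sketch, not a tool referenced in the FBI advisory: it assumes a pipeline has already extracted a per-frame audio loudness envelope and a mouth-opening measure from facial landmarks, and it flags interviews where the two signals don't correlate.

```python
import numpy as np

# Hypothetical sketch: flag a possible deepfake by checking whether the
# audio loudness envelope tracks the speaker's mouth movement over time.
# Both inputs are assumed per-frame series a real pipeline would extract
# (e.g., RMS audio energy and a lip-opening distance from face landmarks).

def sync_score(audio_envelope: np.ndarray, mouth_opening: np.ndarray) -> float:
    """Pearson correlation between per-frame audio energy and lip opening."""
    a = (audio_envelope - audio_envelope.mean()) / audio_envelope.std()
    m = (mouth_opening - mouth_opening.mean()) / mouth_opening.std()
    return float(np.mean(a * m))

def looks_out_of_sync(audio_envelope, mouth_opening, threshold=0.3) -> bool:
    # Genuine on-camera speech tends to correlate strongly; a low score
    # is a heuristic red flag, not proof of manipulation.
    return sync_score(audio_envelope, mouth_opening) < threshold

# Toy demo with synthetic data: in-sync vs. mismatched signals.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 300)                       # 300 video frames
speech = np.abs(np.sin(2 * t)) + 0.1 * rng.random(300)
lips_synced = speech + 0.1 * rng.random(300)      # lips track the audio
lips_dubbed = np.abs(np.sin(2 * t + 2.0)) + 0.1 * rng.random(300)  # lagged

print(looks_out_of_sync(speech, lips_synced))     # False: plausible live video
print(looks_out_of_sync(speech, lips_dubbed))     # True: sync-mismatch flag
```

A production system would need robust feature extraction and tuned thresholds, but the underlying cue is the same one the FBI describes: lip movement that doesn't coordinate with the audio.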

The FBI warning comes just weeks after the federal government separately advised employers to be on the lookout for North Korean information technology workers posing as legitimate teleworkers.

Malicious Use of Deepfakes

Combining stolen personally identifiable information with deepfakes is a new threat tactic, says Andrew Patel, senior researcher at the Artificial Intelligence Center of Excellence at cybersecurity firm WithSecure.

Don't count on deepfakes always being easy to catch, he warns. As they mature, deepfake technologies "will eventually be much more difficult to spot. Ultimately, what we're seeing here is identity theft being taken to a whole new level," he says.

Cybercriminals have already used voice impersonation technologies to bypass voice authorization mechanisms and to conduct voice phishing, or vishing, attacks. Threat actors can impersonate a target to authorize a fraudulent transaction, or spoof a victim's contacts to gather valuable intelligence, the Photon Research Team, the research arm of digital risk protection firm Digital Shadows, told ISMG (see: Deepfakes, Voice Impersonators Used in Vishing-as-a-Service).

APT-C-23, part of the Hamas-linked Molerats group, reportedly targeted Israeli soldiers on social media with fake personae of Israeli women, using voice-altering software to produce convincing audio messages in female voices. The messages reportedly encouraged the Israeli targets to download a mobile app that would install malware on their devices.

In July 2019, cybercriminals impersonated the chief executive of a U.K.-based energy company using a voice-cloning tool in a successful attempt to receive a fraudulent money transfer of $243,000, The Wall Street Journal reported.


About the Author

Prajeet Nair

Assistant Editor, Global News Desk, ISMG

Nair previously worked at TechCircle, IDG, Times Group and other publications, where he reported on developments in enterprise technology, digital transformation and other issues.



