The FBI is cracking down on people using deepfakes to apply for remote jobs

Companies hiring for IT roles might want to ask themselves whether the candidate they are interviewing is a real person.

The FBI has uncovered numerous cases of fraudsters using deepfakes to apply for remote jobs, making it essential for companies to remain vigilant against impostors. These fraudsters use fabricated images, video, and voice recordings to pursue positions at technology, programming, database, and software firms, and by exploiting stolen personal data they can convincingly impersonate someone else.

Many of these fake applicants, if hired, would gain access to sensitive consumer or corporate information, including financial and proprietary data, which suggests their intentions extend beyond simple employment fraud: the goal may be to steal valuable information while deceiving the company. How many fake applications have succeeded, compared with those that have been detected and reported, remains unclear.

More alarming still, some of these impostors may have accepted job offers and drawn salaries before eventually being caught and arrested. Interviewers have reported cases where voice spoofing was used during online interviews and the candidate's lip movements did not match the audio. In some instances the applicant coughed or sneezed, yet the spoofed video failed to reproduce those actions, giving the deception away.

In May, the FBI issued a warning to businesses about North Korean government operatives seeking remote IT and other technical jobs. These impostors often use fake documents and credentials to secure remote work through platforms like Upwork and Fiverr. The federal agency's report detailed how some fraudsters used multiple shell companies to conceal their identities, making detection even more challenging.

While deepfake technology has advanced significantly, cruder attempts still produce mismatches between the fake voice and the speaker's mouth movements. Such fakes can be difficult to spot if no one is actively looking for them: rendering a lifelike human on video is harder than it appears, and a good-enough fake can still slip through.
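As a concrete illustration of that audio/video mismatch, the sketch below estimates how well mouth movement tracks speech loudness. This is a hypothetical example, not an FBI or production tool: it assumes Python with OpenCV, MediaPipe, librosa, and NumPy installed, plus a video file and a separately extracted audio track (the interview.mp4 and interview.wav names are made up), and it only catches the crudest mismatches.

```python
# Hypothetical sketch: flag interviews where mouth movement and
# speech loudness are poorly correlated (a crude lip-sync check).
# Assumes opencv-python, mediapipe, librosa, and numpy are installed,
# and that the audio track has been extracted to a separate WAV file.
import cv2
import librosa
import mediapipe as mp
import numpy as np

UPPER_LIP, LOWER_LIP = 13, 14  # MediaPipe face-mesh inner-lip landmark indices

def mouth_openness(video_path: str) -> tuple[np.ndarray, float]:
    """Per-frame distance between the inner lips, plus the frame rate."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    openness = []
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                openness.append(abs(lm[UPPER_LIP].y - lm[LOWER_LIP].y))
            else:
                openness.append(0.0)  # no face detected in this frame
    cap.release()
    return np.array(openness), fps

def speech_envelope(audio_path: str, fps: float, n_frames: int) -> np.ndarray:
    """Audio loudness (RMS), crudely aligned to one value per video frame."""
    y, sr = librosa.load(audio_path, sr=None)
    hop = max(1, int(sr / fps))  # roughly one RMS value per video frame
    rms = librosa.feature.rms(y=y, frame_length=2 * hop, hop_length=hop)[0]
    return np.resize(rms, n_frames)  # truncate/repeat to match frame count

def lipsync_score(video_path: str, audio_path: str) -> float:
    """Pearson correlation between mouth opening and speech loudness.
    Values near zero or negative suggest the audio may not match the video."""
    mouth, fps = mouth_openness(video_path)
    env = speech_envelope(audio_path, fps, len(mouth))
    return float(np.corrcoef(mouth, env)[0, 1])

if __name__ == "__main__":
    score = lipsync_score("interview.mp4", "interview.wav")
    print(f"lip-sync correlation: {score:.2f} (low values warrant a closer look)")
```

Any threshold on the score would have to be tuned against known-good interviews, and background noise, off-screen speech, or a second face in frame would all confound a measure this simple.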

Researchers at Carnegie Mellon University have developed artificial intelligence capable of recognising edited videos with an accuracy ranging from 30% to 97%. Human reviewers who know which visual inconsistencies to look for, such as incorrect shadows or unnaturally smooth skin, can also spot phoney videos more reliably.
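The CMU detector itself isn't reproduced here, but tools in this space typically build on a common baseline: a convolutional image classifier fine-tuned to separate real face crops from fakes. The sketch below shows that generic baseline in Python with PyTorch and torchvision; the data/train directory layout is an assumption for illustration, and a serious detector would also exploit cues like the shadow and skin-texture artifacts mentioned above.

```python
# Hypothetical baseline, not the CMU system: fine-tune a pretrained
# CNN to label face crops as "real" or "fake". Assumes crops are
# already extracted into data/train/real/ and data/train/fake/
# (made-up paths; ImageFolder maps subdirectory names to class labels).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: fake, real
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a short fine-tune, purely for illustration
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The wide 30%-97% accuracy range reported above is consistent with how strongly such classifiers depend on the manipulation technique and the training data they have seen.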

September 5, 2022