Meta Tests Facial Recognition for Scam Ads and Account Recovery

Image Credits: Meta

Meta is enhancing its efforts to combat scams by expanding the use of facial recognition technology, particularly targeting "celeb-bait" advertisements that exploit images of public figures. The announcement was made on Monday by Monika Bickert, Meta's Vice President of Content Policy.

In a blog post, Bickert explained that these tests are designed to strengthen existing anti-scam measures, which already include automated scans powered by machine learning classifiers. The goal is to make it increasingly difficult for fraudsters to deceive users on Facebook and Instagram into engaging with fraudulent ads.

“Scammers often utilize images of celebrities and content creators to lure individuals into clicking on ads that lead to scam websites, where they may be asked to provide personal information or send money,” Bickert stated. This tactic, referred to as “celeb-bait,” is a direct violation of Meta's policies and poses risks to users.

The new system will apply facial recognition to ads flagged as suspicious by Meta’s existing detection systems when they feature images of public figures at risk of celeb-bait. Bickert noted, “We will use facial recognition to compare faces in the ad against the public figure’s profile pictures on Facebook and Instagram. If we confirm a match and determine that the ad is indeed a scam, we will block it.”

Meta says the feature is used strictly to combat scam ads and emphasizes that any facial data generated during the process is deleted immediately after the comparison.
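Meta has not published implementation details, but the flow it describes, a flagged ad, a one-time face comparison against the public figure's profile pictures, then immediate deletion, can be illustrated with a rough sketch. Everything below is an assumption: the embedding model, the cosine-similarity check, the threshold, and the `looks_like_scam` signal are placeholders, not Meta's actual system.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # illustrative cutoff, not a real Meta parameter


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_celeb_bait(ad_face_embedding: np.ndarray,
                     profile_embeddings: list[np.ndarray],
                     looks_like_scam: bool) -> bool:
    """Return True if a flagged ad should be blocked.

    `ad_face_embedding` is a vector for the face detected in the ad;
    `profile_embeddings` are vectors for the public figure's Facebook and
    Instagram profile pictures. Both would come from some face-embedding
    model, which is not specified here.
    """
    best_match = max(
        (cosine_similarity(ad_face_embedding, p) for p in profile_embeddings),
        default=0.0,
    )
    is_match = best_match >= SIMILARITY_THRESHOLD

    # Per the announcement, facial data generated for the comparison is
    # deleted immediately after the comparison is made.
    del ad_face_embedding

    return is_match and looks_like_scam
```

The notable design point, as described, is that the comparison is one-time: the face data exists only long enough to answer "is this the public figure, and is the ad a scam?" and is then discarded rather than retained as a biometric template.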

Early tests involving a select group of celebrities have yielded promising results in enhancing the speed and effectiveness of scam detection. Additionally, Meta believes this technology could also help identify deepfake scams that utilize generative AI to create misleading imagery.

In the coming weeks, Meta plans to notify more public figures who have been targeted by celeb-bait scams, informing them that they have been enrolled in this protective system. These individuals will be able to opt out at any time through their Accounts Center.

Meta is also exploring the use of facial recognition for identifying impersonator accounts on its platforms. This involves comparing profile pictures from suspicious accounts against those of verified public figures.
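The same kind of comparison could, in principle, drive impersonator detection: scan the profile pictures of suspicious accounts against those of protected public figures and flag close matches for review. The sketch below is purely illustrative; the account identifiers, thresholds, and embedding model are assumptions, not Meta's implementation.

```python
import numpy as np

IMPERSONATION_THRESHOLD = 0.85  # illustrative cutoff


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_possible_impersonators(
        suspect_profiles: dict[str, np.ndarray],
        protected_figures: dict[str, list[np.ndarray]]) -> list[tuple[str, str]]:
    """Flag (suspect_account, public_figure) pairs with matching profile pictures.

    Embeddings are assumed to come from a face-embedding model; account IDs
    and the threshold are placeholders for illustration only.
    """
    flagged = []
    for account_id, suspect_emb in suspect_profiles.items():
        for figure_name, figure_embs in protected_figures.items():
            best = max(
                (cosine_similarity(suspect_emb, e) for e in figure_embs),
                default=0.0,
            )
            if best >= IMPERSONATION_THRESHOLD:
                flagged.append((account_id, figure_name))
    return flagged
```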

Furthermore, Meta is testing a video selfie feature aimed at helping users regain access to their accounts after being locked out, for example when scammers have hijacked them. This method promises a quicker recovery process than traditional identity-verification methods such as submitting a government-issued ID.

“Video selfie verification expands options for account recovery and only takes a minute,” Bickert remarked. “While hackers may continue to exploit account recovery tools, this method will be more challenging for them to misuse than conventional document-based verification.”
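Bickert's description suggests a flow along these lines: the locked-out user records a short video selfie, the system compares sampled frames against the profile pictures already on the account, and access is restored if enough frames match. The sketch below is purely illustrative; frame sampling, thresholds, and the per-frame check are assumptions rather than Meta's implementation.

```python
import numpy as np

MATCH_THRESHOLD = 0.8     # illustrative cutoff
MIN_MATCHING_FRAMES = 3   # require agreement across several frames (assumption)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_video_selfie(selfie_frame_embeddings: list[np.ndarray],
                        profile_embeddings: list[np.ndarray]) -> bool:
    """Return True if the video selfie matches the account's profile photos.

    Embeddings would come from a face-embedding model applied to sampled
    frames of the uploaded selfie and to the profile pictures already on
    the account; neither the model nor the thresholds are Meta's.
    """
    matching_frames = 0
    for frame in selfie_frame_embeddings:
        best = max(
            (cosine_similarity(frame, p) for p in profile_embeddings),
            default=0.0,
        )
        if best >= MATCH_THRESHOLD:
            matching_frames += 1

    # A video, unlike a single still photo, lets the system check consistency
    # across frames, which is harder to spoof with a stolen image -- one
    # plausible reason it would be harder to abuse than document uploads.
    return matching_frames >= MIN_MATCHING_FRAMES
```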

The tests are running globally, with the notable exception of the U.K. and the EU, where stringent data protection regulations apply. Meta says it has been engaging with regulators in those regions as it seeks feedback on the initiatives.

With these tests, Meta is aiming to improve user safety while addressing ongoing concerns about privacy and data protection in its use of facial recognition.