LinkedIn this week announced a new AI image detector that catches fake profiles with a 99% success rate.
The platform’s Trust Data Team claims the new approach can catch falsified profile images and remove fake accounts before they reach LinkedIn members.
This latest security innovation comes a few months after it was revealed that LinkedIn featured in over half of all phishing attacks in Q1 2022.
Why Are People Creating Fake LinkedIn Profiles?
Like Twitter, LinkedIn has struggled in recent times with the number of fake profiles on its site. In the first half of 2022 alone, the platform detected and removed 21 million fake accounts.
But why exactly are all these fake profiles popping up? For some, it's to build trust with visitors to their websites; for others, it's for SEO, rooted in the false belief that Google ranks articles with named authors higher than those without.
Whatever the motivation, there's no doubt that advances in AI have made creating a fake profile easier than ever.
“We are constantly working to improve and increase the effectiveness of our anti-abuse defenses to protect the experiences of our members and customers. And as part of our ongoing work, we’ve been partnering with academia to stay one step ahead of new types of abuse tied to fake accounts that are leveraging rapidly evolving technologies like generative AI.” – LinkedIn’s announcement on its new approach
Fake Accounts Have Become Harder to Catch
The new approach follows lengthy research into recognizing the structural differences between AI-generated faces and real faces, differences most people don't know how to spot.
LinkedIn keeps a close eye on unwanted activity that could pose a security risk, such as fake profiles and content policy violations. Until recently, however, AI-generated images were sophisticated enough to be nearly impossible to detect.
The key to solving this has been knowing exactly what to look for. According to LinkedIn, AI-created images all share common patterns it calls 'structural differences', patterns that real photos don't exhibit.
An example in its blog post references a test of 400 AI-generated images against 400 real ones. When each set was overlaid and averaged, the composite of real photos dissolved into a blur, while the AI-generated composite stayed sharp around the eyes and nose, showing that those regions sit in nearly identical positions across fake photos.
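To make the averaging idea concrete, here is a minimal sketch in Python, assuming a folder of similarly sized JPEG face crops. The directory names, image size, and helper function are illustrative assumptions, not LinkedIn's actual pipeline.

```python
# A minimal sketch of the image-averaging idea described above.
# The paths, image size, and the assumption that faces are roughly
# aligned (as profile photos tend to be) are all hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image


def average_faces(image_dir: str, size: tuple[int, int] = (128, 128)) -> Image.Image:
    """Resize every JPEG in a directory to a common size and average them.

    Sharp regions in the result mean a feature sits in nearly the same
    place across the whole set; blurry regions mean its position varies.
    """
    stack = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        img = Image.open(path).convert("L").resize(size)  # grayscale, common size
        stack.append(np.asarray(img, dtype=np.float64))
    if not stack:
        raise ValueError(f"no .jpg images found in {image_dir}")
    mean = np.mean(stack, axis=0)  # pixel-wise average across the stack
    return Image.fromarray(mean.astype(np.uint8))


# Hypothetical usage, mirroring the 400-vs-400 comparison in the post:
# average_faces("real_faces").save("real_composite.png")   # expect a blur
# average_faces("ai_faces").save("ai_composite.png")       # expect sharp eyes/nose
```

On this reading of the test, a sharp composite is the tell: generative models tend to place key facial features at fixed coordinates, while real photos vary in pose and framing.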
While AI shows no sign of slowing as a source of security risks, LinkedIn's latest development can be counted as a win in the fight against fake accounts.