Creating fake social media accounts to trick people is nothing new, but this latest strategy has a sinister twist that makes it stand out from the crowd. An in-depth analysis published on the KrebsOnSecurity blog claims that cybercriminals have been using artificial intelligence (AI) to generate profile pictures of people who don’t exist and pair them with job descriptions stolen from real people on LinkedIn. The result is fake profiles that are nearly impossible for most people to identify as fake.
Numerous use cases
Users first noticed the suspicious accounts through a growing wave of attempts to join various invite-only LinkedIn groups. Group owners and moderators typically only realize what is going on after receiving dozens of these requests at once and noticing that nearly all of the profile pictures look alike (e.g., same angle, same face shape, similar smiles). The researchers say they have reached out to LinkedIn’s customer support, but so far the platform has not found a cure-all. One way to address the challenge is to have affected companies submit complete employee lists and then ban any accounts claiming to work there that do not appear on them. Besides being unable to pinpoint who is behind this onslaught of fake professionals, researchers are also trying to understand what it all means. Apparently, most of the accounts sit dormant: they don’t post anything and don’t respond to messages.
Cybersecurity firm Mandiant believes the accounts are being used by hackers trying to land roles at cryptocurrency firms, the first stage of a multi-stage attack aimed at draining the companies’ funds.
Others believe this is part of an old-fashioned romance scam, in which victims are lured by attractive pictures into investing in fake crypto projects and trading platforms.
In addition, there is evidence that groups such as Lazarus use fake LinkedIn profiles to spread info-stealers and other malware among job seekers, especially in the cryptocurrency industry. Finally, some believe these bots could be used to amplify fake news in the future.
In response to KrebsOnSecurity’s research, LinkedIn said it was considering the idea of domain verification to address the growing problem. “It’s an ongoing challenge, and we are continually improving our systems to stop fakes before they go live,” the company said in a written statement.