(5 min read)
Cyber criminals are always looking for new ways to exploit technology for their own gain.
With the rise of artificial intelligence (AI) and deepfake technology, these criminals now have even more tools at their disposal. In this article, we will explore some of the ways that cyber criminals can use AI and deepfake technology to carry out their nefarious activities.
Recently, there’s been a lot of press coverage and Twitter chat about AI tools like ChatGPT. It’s not the only one, though – Microsoft has developed Bing into an AI writing tool, and other options like Copy.ai have been around for a while.
In fact, we asked ChatGPT to write this article for us. It did an OK job, but because of the nuances of cyber criminality, and the power of words, we’ve had to do a fair bit (well, quite a lot, actually) of editing to produce content that we’re happy to share.
However, while AI writing tools may still have a long way to go for businesses like ours, they can already be used to speed up the cyber attack process – and to create hacking and scam campaigns that are more likely to fool the target. And that target could be you.
AI-based phishing attacks
Phishing attacks are a common tactic used by cyber criminals to steal sensitive information from unsuspecting people. Typically, these attacks involve the creation of a fake website or email that appears to be from a legitimate source. The target is then tricked into providing their personal information, such as login credentials or credit card numbers.
With AI, cyber criminals can create even more convincing phishing attacks. By analysing online behaviour and preferences, AI algorithms can generate highly personalised messages that are tailored to the target's interests and needs. These messages are much more difficult to detect as fake, and as a result, people like us are more likely to fall for the scam.
That’s because if the content of an email is relevant to you, you’re much more likely to miss the obvious phishing cues – the vague salutation, the strange link or email address – because the email just ‘makes sense’ in a familiar context. And AI can gather relevant, up-to-date information very quickly and from a wide range of sources. So vigilance is key: don’t click on links or open attachments unless you’re very sure they’re genuine.
Deepfake scams
Deepfake technology allows cyber criminals to create highly convincing videos or audio recordings that appear to be from a legitimate source. For example, a deepfake video could show a high-profile CEO giving a speech that they never actually gave. Similarly, a deepfake audio recording could be used to impersonate a bank representative and convince the victim to provide sensitive information.
These types of scams can be highly effective because they exploit the victim's trust in the source of the message. If the victim believes that the message is coming from a legitimate source, they are more likely to take action, even if the message seems suspicious.
What’s more, AI now allows these videos and photos to be made faster and more easily. Convincing video can now be created from a single image, where it used to take many. This form of fakery still requires expertise, but it takes far less time to produce.
Malware attacks
Malware attacks are another common tactic used by cyber criminals. Typically, these attacks involve the installation of malicious software on the victim's device, which can then be used to steal sensitive information or carry out other malicious activities.
AI algorithms can scour social media and the wider internet to find out more about a potential target, analyse their behaviour, and help criminals to consider new attack pathways – although, at the moment, AI can’t develop the relevant code entirely by itself. The point, again, is that this is much quicker and more wide-ranging than if a human had to do it alone.
And let’s not forget: AI can’t write perfect code, but it can offer up many suggestions at speed. And while AI isn’t creative in the true sense of the word, it can generate ideas, options and alternatives that might work, giving a malicious coder or developer a useful starting point. A developer we know, testing ChatGPT, asked it for 10 examples of code to address a particular issue. Six were obvious and unexciting, and two were plain wrong – but the remaining two suggested ways of thinking that our developer friend hadn’t considered. In that way, AI can be very useful for both legitimate and criminal ends.
Automated social engineering
Social engineering is a tactic used by cyber criminals to manipulate people into divulging sensitive information or taking other actions that are not in their best interest. Typically, social engineering attacks involve the use of deception, coercion, or persuasion to convince the victim to take the desired action.
With AI, cyber criminals can automate the social engineering process. Scraping social media and other sites for target data becomes much more efficient. By analysing the target’s online behaviour and preferences, AI algorithms can generate personalised messages that are more likely to convince the target to take the desired action. More realistically, as with the malware example above, AI will allow the human hacker to consider far more ideas and options – and so become much more effective.
So, the answer to the question in this blog’s title is ‘yes’ – 2023 is likely to see an increase in deepfake hacking and scam attempts, and perhaps a new wave of criminal creativity when it comes to trying to steal your data or your money.
All the more reason why you, your business and your people need to be aware of the potential risks and know how to prevent them.
Find out more about how our online training episodes can help employees recognise these threats – just contact us today.
Sign up to get our monthly newsletter, packed with hints and tips on how to stay cyber safe.
Mark Brown is a behavioural science expert with significant experience in inspiring organisational and culture change that lasts. If you’d like to chat about using Psybersafe in your business to help to stay cyber secure, contact Mark today.