Will AI stab your cybersecurity efforts in the back?

There’s no doubt that AI is going to be a powerful force for good in cybersecurity.

But (and there is always a but), AI is relentlessly impartial – it serves whoever wields it. This means that hackers have the same opportunities as the rest of us to leverage AI’s capabilities for their own business purposes – and that includes using it to get their hands on your precious data.

So, how do you stay ahead of the AI game? How can you make sure the odds are stacked in your favour?

First, let’s look at what you’re up against.

The hackers’ AI arsenal

The term weaponisation is commonly used to describe how hackers are using AI to counter the advances in cybersecurity solutions. And there’s never been a more appropriate term.

Happy hackers are leveraging AI-enabled attacks to full effect and rejoicing in their impact. With the ability to automate attacks at scale, generate sophisticated phishing emails or launch automated botnets at their fingertips, it’s no wonder AI is the new darling of bad actors worldwide.

What does an AI attack look like?

Undercover operatives: Malware and attacks are being refined and made more subversive as hackers conceal malicious code in seemingly benign applications – to be triggered once a specific number of users have subscribed, a set time has passed, or an authentication process is actioned through voice or visual recognition. Like remote-controlled land mines, they’re designed to do maximum damage to people and businesses at the moment their victims are most complacent.

Where does AI come into this? Hackers use AI models to conceal these trigger conditions and derive the private keys that control the time and place the malware will explode into life. These models can remain inert for years, undetected and unsuspected, until the hacker decides the time is right.
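To make the concealment concrete, here’s a minimal and deliberately benign Python sketch of the ‘environmental keying’ idea – the attribute string, hashing scheme and trigger check are all illustrative assumptions, not taken from any real malware. The code stores only a hash of the derived key, so static analysis of the file reveals what the check looks like but not what environment it is waiting for:

```python
import hashlib

# Illustrative only: computed offline by the attacker from the intended
# target's attributes. Only this hash ships with the code - the trigger
# condition itself never appears anywhere in the file.
EXPECTED_KEY_HASH = hashlib.sha256(
    hashlib.sha256(b"target-attributes-known-only-at-build-time").digest()
).hexdigest()

def derive_key(observed_attributes: bytes) -> bytes:
    """Derive a candidate key from whatever the running environment exposes
    (hostnames, subscriber counts, or face/voice embeddings in the AI case)."""
    return hashlib.sha256(observed_attributes).digest()

def is_target(observed_attributes: bytes) -> bool:
    """True only in the intended environment; a scanner sees a hash,
    not the environment that produces a match."""
    candidate = derive_key(observed_attributes)
    return hashlib.sha256(candidate).hexdigest() == EXPECTED_KEY_HASH

if __name__ == "__main__":
    print(is_target(b"some-other-environment"))                      # False
    print(is_target(b"target-attributes-known-only-at-build-time"))  # True
```

The defensive takeaway: because the trigger can’t be recovered from the code itself, signature-based scanning struggles here – which is why behavioural detection at run time matters so much.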

Learning on the job: One of the most loved characteristics of AI technology is its ability to acquire knowledge and learn on the job. So, during an attack, an AI-based program can also collect information on what went well and what didn’t and adapt – becoming more effective every time.

Counter-intelligence: Bad actors use AI to launch intelligent attacks that spread through your network or system, looking for unpatched vulnerabilities. Even patched vulnerabilities aren’t safe, as an intelligent attack will look for alternative ways to circumvent them and compromise your system.

Silent and stealthy: Here, AI-enabled malware programs mimic system components you trust. They automatically learn your patching schedule, your communication protocols, and when your system is most vulnerable. Once these factors are known, hackers stealthily execute an undetectable and untraceable attack and come and go at will.

For example, the 2018 AI-enabled attack on TaskRabbit, an online platform connecting freelance taskers and their clients, compromised 3.75 million users. The subsequent investigation into the grand-scale crime could not trace the attack.

These are just a few of the ways hackers are leveraging AI to break bad. In their article “AI And Cybercrime Unleash A New Era Of Menacing Threats”, Forbes discusses the AI-powered cyberattacks you can expect to face, including business email compromise (BEC), advanced persistent threats (APTs), fraudulent transactions, payment gateway fraud, DDoS attacks and theft of intellectual property.

The good guys, and what they’re doing to save your bacon

As expected, cybersecurity vendors are fighting fire with fire – or, more accurately, AI with AI and ML.

Tech developers everywhere are integrating more AI into their tech to improve data protection and counter new threats as they evolve. However, the rate of adoption has been somewhat uneven. While some recognised the value of AI and started incorporating it into their solutions a decade ago or more, others are comparatively late to the game and are now faced with accelerating their efforts to match those of the cybercriminal community.

CrowdStrike is one of the strongest of the early AI adopters. In 2022, Frost & Sullivan awarded CrowdStrike its Global Technology Innovation Leadership Award in the endpoint security sector, commenting: “CrowdStrike leads the industry with regards to the application of artificial intelligence/machine learning to endpoint security, as well as providing unparalleled prevention of malware and malware-free attacks on and off the network.”

But given the growth in vulnerability exploitation, is enough being done – and in time? A recent threat report from Palo Alto Networks Unit 42 found that the average time from compromise to data exfiltration fell from 44 days in 2021 to 30 days in 2022 and just five days in 2023 – with some incidents now seeing data exfiltrated in a matter of hours.

Are insider AI slip-ups more insidious than cybercrime?

While AI-empowered cyberattacks are to be feared, you should be equally (if not more) wary of data leakage caused by your own people using generative AI tools like ChatGPT. According to TechTarget, the risks include employees exposing sensitive business data, security vulnerabilities in the AI tools themselves, data poisoning and theft, and breaches of compliance obligations.
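One practical mitigation is to screen prompts before they leave your environment. Here’s a minimal, hypothetical Python sketch of that idea – the patterns, placeholder format and function names are our own assumptions, and real data loss prevention (DLP) tooling is far more sophisticated:

```python
import re

# Hypothetical patterns an organisation might flag before text is sent
# to an external generative AI service; real DLP rules are far richer.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholders before the
    prompt crosses the organisation's boundary."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise this: contact jo@example.com, card 4111 1111 1111 1111"
    print(redact_prompt(raw))
    # Summarise this: contact [REDACTED EMAIL], card [REDACTED CREDIT_CARD]
```

A filter like this is only one layer, of course – policy, training and vendor-side controls matter just as much.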

As criminals increasingly exploit generative AI products like ChatGPT, a self-described ‘wave of innovation’ from leading vendors like Microsoft has emerged. This includes Microsoft unveiling its Security Copilot tool for cybersecurity professionals in March 2023 and announcing plans to embed generative AI technology throughout its broad portfolio of security products.

CrowdStrike also announced the release of a new product (CrowdStrike Falcon® Data Protection) designed to comprehensively track data in the modern cloud and AI era and make generative AI safe for enterprises.

Such is the danger that in January 2024, the ASD (Australian Signals Directorate) released a new publication, “Engaging with Artificial Intelligence”, which includes advice for medium to large organisations on how to engage securely with AI and stop malicious and accidental exposures in real time. The guidance applies to a range of stakeholders, from programmers to end users, and senior executives to analysts and marketers.

While we suggest you download the publication and read it in full, key takeaways include: managing privileged access to the AI system; keeping backups of the AI system; ensuring the system is secure by design; resourcing your organisation with suitably qualified staff so the system is set up, maintained and used securely; and running system health checks alongside logging and monitoring.
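As a taste of what the privileged-access, logging and monitoring themes can look like in code, here’s a small, hypothetical Python sketch – the role names, actions and log format are illustrative assumptions on our part, not taken from the ASD publication:

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical roles allowed to perform privileged actions on an AI system.
PRIVILEGED_ROLES = {"ml-admin", "security-analyst"}

# Write structured audit records to a file that monitoring can watch.
logging.basicConfig(filename="ai_access_audit.log", level=logging.INFO)

def audit_ai_access(user: str, role: str, action: str) -> bool:
    """Allow privileged AI-system actions only for approved roles, and log
    every attempt - allowed or denied - for later review."""
    allowed = role in PRIVILEGED_ROLES
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "allowed": allowed,
    }))
    return allowed

if __name__ == "__main__":
    audit_ai_access("alice", "ml-admin", "update-model-weights")  # allowed
    audit_ai_access("bob", "marketing", "export-training-data")   # denied
```

The point isn’t the specific code – it’s that every privileged touch on the AI system is gated by role and leaves an auditable trail.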

If you would like to safeguard your business against today’s and tomorrow’s cyber threats (AI-enabled and otherwise), feel free to contact the Amidata team.

