In the fall of 2023, the two biggest topics in the IT world are generative AI (GenAI) and cybersecurity, particularly the threat of ransomware. Everyone is talking about the impact of large language models (LLMs) and GenAI on everything from media production to social engineering and hacking.

How generative AI will change the threat landscape

We expect that GenAI will be used by both attackers and defenders to augment their capabilities. Here are some ways this will change the landscape:

  • Cat-and-mouse games: As offense and defense leverage new AI enhancements, the landscape will keep shifting, with a constant evolution of exploits and countermeasures. AI will also shorten cycle times, increasing the pace of this evolution.
  • Increased sophistication: AI will enable more complex attacks that are harder to counter. This may shift the balance of power toward the attackers.
  • Strain on resources: Organizations will need to develop and deploy AI-based countermeasures to compete with AI-augmented threat actors. With AI still an emerging field and resources still under development, this may prove difficult for smaller organizations already strained by the existing threat landscape.

Generative AI will benefit security teams — and their attackers 

Here are some points to consider from the perspective of attack and defense:

GenAI will likely be used by attackers to deliver:

  • Enhanced attack automation: Automation makes everything better, faster and cheaper, and that includes cyber attacks, which already run on a "launch many attacks, only one needs to succeed" model. Anything that helps attackers scale their attack volume is a huge advantage to them. There is also real concern on the social engineering front, with AI helping to deliver highly sophisticated phishing attacks, including voicemail and chat messages. For example, many phishing attempts today are readily identified by poor grammar or spelling; AI lets attackers quickly review and edit their messages to appear more credible before sending them.
  • Polymorphic malware: AI can enable the rapid generation of highly changeable malware variants, packages whose code constantly mutates to evade detection by current security tools. This may shift the power balance toward the attackers as defenders struggle to keep up.

Generative AI can be used by defenders to provide:

  • Improved threat detection: GenAI can augment the machine learning and activity-based threat analysis capabilities of existing intrusion detection and threat hunting tools.
  • AI-augmented security: AI-augmented security tools can help organizations scale the capabilities of their existing security teams, enabling the analysis of massive amounts of data in real or near real time.
  • Simulation for training: GenAI can help organizations simulate ransomware attack scenarios for training and tabletop exercises (see the sketch after this list).
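
As a concrete illustration of the last point, the sketch below uses an LLM to draft a ransomware tabletop-exercise scenario. It is a minimal sketch under stated assumptions, not a prescribed tool: it assumes the OpenAI Python client and an API key in the environment, and the model name, organization profile and prompt wording are illustrative placeholders. Any GenAI provider or locally hosted model could be substituted.

```python
"""Minimal sketch: generate a ransomware tabletop-exercise scenario with an LLM.

Assumptions (not from the article): the OpenAI Python client (pip install openai),
an OPENAI_API_KEY in the environment, and an illustrative model name.
"""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Profile of the organization being exercised (illustrative values).
org_profile = {
    "industry": "regional healthcare provider",
    "size": "1,200 employees",
    "crown_jewels": "electronic health records and imaging systems",
}

system_prompt = (
    "You are a cybersecurity exercise facilitator. Produce realistic but "
    "fictional ransomware tabletop scenarios for incident-response training. "
    "Do not include working exploit code or real malware indicators."
)

user_prompt = (
    f"Create a tabletop exercise for a {org_profile['industry']} with "
    f"{org_profile['size']}, whose critical assets are "
    f"{org_profile['crown_jewels']}. Include an initial-access narrative, "
    "three timed injects, discussion questions for the incident response team, "
    "and success criteria tied to the organization's incident response plan."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model your provider offers
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Generated scenarios like this still need review by a human facilitator and should be mapped to the organization's actual runbooks, so the exercise tests real procedures rather than generic advice.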

The ultimate impact on the ransomware threat landscape will be determined by the speed and degree to which both sides embrace and leverage AI technology. Win or lose, generative AI is sure to increase the tempo and potential impact of ransomware attacks.

How should organizations prepare for these changes?

To prepare for the impact of generative AI on ransomware and cybersecurity, IT organizations should consider the following three key actions:

  1. Invest in AI-enhanced cybersecurity solutions: In any arms race, staying ahead of the opposition is crucial to maintaining your advantage. Organizations need to invest in AI-driven security solutions and integrate them into their existing toolbox sooner rather than later. Threat actors will not be sleeping on this technology.
  2. Develop AI expertise in-house: In-house AI and machine learning expertise is crucial for understanding which AI-driven tools best suit an organization's requirements. Hiring or training IT professionals with AI knowledge will enable good decision-making and help ensure that organizations stay ahead of the AI adoption curve.
  3. Continuous training and education: The cybersecurity threat landscape is continuously evolving. Once AI and machine learning expertise is developed or acquired, it must be maintained. Organizations must encourage their IT team to participate in relevant training, attend conferences and engage with cybersecurity communities to share knowledge and best practices.

Additionally, it's essential to maintain a robust incident response plan that specifically addresses ransomware attacks. This plan should include regular backups, employee training on security best practices, and well-defined procedures for containing and mitigating ransomware incidents.
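
Parts of such a plan can be verified automatically. The hedged sketch below checks that the most recent backup falls within an expected freshness window, so a stale or missing backup is flagged before it matters in an incident. The backup path, file-based layout and 24-hour window are assumptions for illustration, not a prescribed setup.

```python
"""Minimal sketch: verify backup freshness as part of ransomware readiness.

Assumptions (not from the article): backups land as files under a local or
mounted path; the path and the 24-hour freshness window are illustrative.
"""
from datetime import datetime, timedelta, timezone
from pathlib import Path
import sys

BACKUP_DIR = Path("/mnt/backups/daily")  # hypothetical backup location
MAX_AGE = timedelta(hours=24)            # freshness window; match your backup schedule


def newest_backup_age(backup_dir: Path):
    """Return the age of the most recently modified file, or None if none exist."""
    files = [p for p in backup_dir.rglob("*") if p.is_file()]
    if not files:
        return None
    newest = max(files, key=lambda p: p.stat().st_mtime)
    modified = datetime.fromtimestamp(newest.stat().st_mtime, tz=timezone.utc)
    return datetime.now(timezone.utc) - modified


age = newest_backup_age(BACKUP_DIR)
if age is None or age > MAX_AGE:
    # A nonzero exit code lets a scheduler or monitoring job raise an alarm.
    print(f"ALERT: no backup newer than {MAX_AGE} found in {BACKUP_DIR} (age: {age})")
    sys.exit(1)

print(f"OK: newest backup in {BACKUP_DIR} is {age} old")
```

A real deployment would typically query the backup product's own API or logs and periodically test restores rather than relying on file timestamps alone; the point is that "regular backups" should be continuously verified, not assumed.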

Next steps 

Organizations should regularly conduct assessments to evaluate their current security posture and outline the specific steps needed to develop a tailored strategy that incorporates AI-driven security tools to mitigate ransomware threats.

  1. Solution selection and integration: Organizations will need to determine the right solution for their environment and threat level, then develop integration plans and timetables.
  2. Training and workshops: WWT can offer customers training programs and workshops to educate staff on AI and cybersecurity best practices. These can help jump-start internal talent development within the customer's IT organization.

Many organizations find it helpful to engage a trusted partner for ongoing support and monitoring to ensure that AI-driven security solutions are operating effectively and adapting to evolving threats. 
