Though generative AI (GenAI) has been decades in the making, OpenAI's launch of ChatGPT took the world by storm. It reached one hundred million users in just two months, making it one of the fastest-growing applications of all time. GenAI and large language models (LLMs) have the potential to revolutionize business and daily life just as the internet, email and smartphones did.

Organizations across the globe are weighing the security implications of adopting GenAI and LLMs in a corporate environment, and it's crucial to proceed with the necessary caution. But vendors are racing to release their own models, and security teams need to embrace the moment.

The sheer volume and complexity of data and threats, compounded by an ongoing talent shortage, have become impossible to tackle manually. "You can't handle security at a human scale anymore," said Jeetu Patel, Cisco's executive vice president and general manager of security and collaboration. "You have to handle it at a machine scale."

GenAI is the shot in the arm that security teams desperately need. The technology offers new ways to manage threats, evaluate IT ecosystems and operations, increase speed and efficiency, augment staff and resources, and boost productivity. It's an exciting time for cybersecurity professionals.

Given the many known and unknown security risks of GenAI, how can security teams take advantage of these tools? We've identified five use cases to start integrating GenAI into security operations without compromising your organization's data and corporate assets.

Recommended reading: For more insight into how organizations are securing AI, check out WWT Research - Secure Your Future: A CISO's Guide to AI 

Use cases for generative AI in cybersecurity 

There are many ways your security team can take advantage of GenAI without risking your organization's proprietary data. We like to think of these tools as the smartest intern you'll ever hire; results must be validated, but the educated and efficient output is a considerable timesaver. 

  • Augment security talent and resources: The cybersecurity industry has a significant shortage of skilled practitioners, with thousands of unfilled job openings. Security teams must bridge this gap while addressing ever-increasing security demands with finite budgets. GenAI can help ease this burden and function as an extremely efficient digital assistant for basic tasks, such as researching vendors and emerging threats, freeing staff to focus on more advanced work.
  • Incident response and threat hunting: Telemetry has become a challenge as organizations continue to collect and store unprecedented amounts of data in disparate locations. In this data swamp, it becomes nearly impossible to detect, prioritize and hunt threats effectively. GenAI offers a way to bring all of this data into one repository, which can then be queried to detect security incidents. These tools can also help practitioners write scripts to automate the detection of anomalies that indicate potential attacks (a minimal sketch of such a script appears after this list).
  • Policy management: Policies that govern an organization's security practices must be developed, implemented and updated regularly in an ongoing and resource-intensive process. GenAI tools can recommend and draft security policies based on your organization's threat profile, technology infrastructure and business objectives, and ensure policies are comprehensive across your IT environment. You can also use GenAI to validate your existing policies against industry best practices and regulatory requirements.
  • Zero trust: Zero trust continues to be a driving force in cybersecurity, and AI will help execute zero trust strategies with increased accuracy. GenAI can continually assess endpoints and assign them risk scores, and automate the review of access requests and permissions.
  • Reporting: GenAI can help streamline and automate the process of drafting reports, including those required for incident response, threat intelligence, risk assessments, audits and regulatory compliance. With the help of natural language processing (NLP), these tools can translate technical data into drafts that are concise and easily understood by stakeholders across the organization with varying levels of technical knowledge. GenAI can also be leveraged to create report templates that can be evaluated against industry standards and best practices to ensure consistent and adequate reporting (the second sketch after this list illustrates drafting an incident summary with an LLM).
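To make the threat-hunting use case concrete, here is a minimal sketch of the kind of detection script a GenAI assistant might help a practitioner draft: it flags accounts whose failed-login volume far exceeds their historical baseline. The account names, counts and thresholds are illustrative assumptions, not a production detection rule.

```python
# A minimal sketch of a detection script a GenAI assistant might help draft.
# All account names, counts and thresholds are illustrative assumptions.

# Hypothetical daily baselines (average failed logins per account per day)
# and today's observed counts, as they might be pulled from a log repository.
baseline = {"alice": 2.0, "bob": 1.5, "svc-backup": 0.5}
today = {"alice": 3, "bob": 1, "svc-backup": 42}

def anomalous_accounts(today_counts, baselines, multiplier=5.0, floor=10):
    """Return (account, count, baseline) tuples for accounts whose count is
    at least `floor` and more than `multiplier` times their baseline."""
    flagged = []
    for account, count in today_counts.items():
        base = baselines.get(account, 0.0)
        if count >= floor and count > base * multiplier:
            flagged.append((account, count, base))
    return sorted(flagged, key=lambda item: item[1], reverse=True)

for account, count, base in anomalous_accounts(today, baseline):
    print(f"Investigate {account}: {count} failed logins today (baseline ~{base}/day)")
```

In practice, a practitioner would still validate the logic and tune the thresholds before relying on it, consistent with the "smartest intern" rule above.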

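For the reporting use case, the sketch below shows how an LLM API could turn raw incident details into a stakeholder-friendly summary. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, prompt wording and incident details are illustrative, and an internally hosted or other vendor model could be substituted.

```python
# A minimal sketch of using an LLM to draft a stakeholder-friendly incident
# summary from technical details. Assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set; model choice and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

incident_details = """
Alert: 42 failed logins for svc-backup from 203.0.113.7 within 15 minutes.
MFA not enabled on the account. Source IP not on the corporate allow list.
Account locked pending investigation; no successful authentication observed.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You draft concise security incident summaries for non-technical "
                "executives. Avoid jargon; state business impact and next steps."
            ),
        },
        {"role": "user", "content": incident_details},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a human reviewer validates the draft before it is circulated
```

As with any GenAI output, the draft is a starting point that a practitioner reviews and edits before distribution.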
Select the best generative AI tools for security operations 

Vendors are rushing to market with new GenAI tools and integrating AI and LLM features into existing products. As you evaluate these new tools, it's important to keep the fundamentals of tool rationalization in mind. What feature sets do you truly need? Are there any areas where AI would dramatically increase productivity and generate the largest ROI?

Before adopting any GenAI model or LLM for enterprise use, consider forming a committee with representation from every relevant business unit. This will allow you to establish necessary use cases and outline proper usage guidelines and policies. The committee can also help prevent the development or integration of AI models in silos and discourage the use of shadow AI. 

When it comes to selecting a vendor, assess the security practices of each, taking into account data handling procedures and any prior audits or assessments. You'll also need visibility into the underlying algorithms and methodology of the tool. This will ensure you can trust, control and properly secure the generated data.

Then, we suggest conducting pilots of different products, similar to how other security tools and vendors are evaluated. Digital twins can simulate your environment and test different scenarios with your chosen vendors. This controlled environment will give you a clearer picture of how effective and accurate each tool is for specific use cases. A trusted partner can help demonstrate how these tools work by replicating your environment and running different models over time.

Final thoughts 

Embracing GenAI and LLMs as an organization reduces the likelihood that people will use these tools in secret and put corporate data at risk. CISOs and their teams have a key role in stewarding the usage of GenAI and leading the safe implementation and integration of these tools.

By incorporating the tactics above, security teams can embrace this technology while still protecting the organization's most important assets. 

Learn more about AI and data services.