Manage Generative AI Before It Manages You
When it comes to AI, governance and risk management should remain top priorities for IT leaders being inundated with requests about how best to implement the technology.
This was originally published in May 2023.
The meteoric rise of generative AI — particularly the adoption of OpenAI's ChatGPT — is putting immense pressure on IT leaders to quickly determine how best to harness the technology effectively and safely for their organizations.
Generative AI holds tremendous potential to disrupt productivity across your entire organization. As such, investing in or building AI and analytics solutions that make better use of data remains a strategic priority — one we advise all our clients to focus on.
For now, generative AI should be treated with caution. Unless you are using a proprietary generative AI solution, the technology is not ready for critical or semi-critical enterprise use cases. Governance and risk management must be top of mind for IT leaders being inundated with requests about how best to implement the technology.
Remember, large language models (LLMs) do not understand, they predict. LLMs are neural networks that take massive amounts of data and synthesize it into probabilities of the next word, phrase or thought in a sentence. A good rule to keep in mind: generative AI is frequently wrong (even by its creators' own admission), but it is always confident.
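To make that concrete, here is a minimal sketch of next-token prediction using the small, openly available GPT-2 model via the Hugging Face transformers library. It is purely illustrative: ChatGPT's internals are not public, and the model choice and prompt here are assumptions, but every LLM works by scoring candidate next tokens in roughly this way.

```python
# Illustration only: an LLM assigns probabilities to candidate next tokens.
# GPT-2 is used because it is small and openly available; ChatGPT's internals are not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Generative AI is frequently wrong, but it is always"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}  p={prob.item():.3f}")
```

The model never checks whether a candidate continuation is true; it only reports which continuations are statistically likely given its training data.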
You can think of generative AI as a highly skilled intern — a resource with great potential who still requires training, who needs to learn your company playbook and culture, and who must familiarize themselves with organizational structure and policies.
The potential is palpable: cost savings through automation, scalability to handle large volumes of tasks and responsibilities, 24/7 availability, and a significant boost in productivity through efficiency. But the risks are just as powerful: the pervasive spread of misinformation, exposure of sensitive data and information, poor results stemming from low-quality data, and potentially high costs of ownership and operation.
How should you approach generative AI?
Governance and policy
First and foremost, you need to have a generative AI governance policy in place.
Even if it's preliminary in nature, you or your leadership team should be communicating with the rest of the organization about your evaluation of generative AI tools, such as ChatGPT, and providing specific guardrails or restrictions you expect to be followed while you continue to evaluate the technology's enterprise readiness.
Your policy should emphasize that generative AI is a fast-moving area that shows promise and that your team will continue to evaluate it. While creating your policy, key considerations include:
- Is your data governance thorough enough to support LLM use cases?
- From an information security perspective, are the LLM options in consideration compliant with your policies?
- Can you explain how it works? If not, you can't fully articulate all the ways you can win and lose with the technology.
- The massive popularity of ChatGPT is likely creating "shadow AI" in your organization. Will IT control that? Can your IT teams own the education and user adoption processes?
- Think about roadmaps for implementation, human-centered service design and automation vs. augmentation scenarios. Implementation dependencies include prompt engineering, guardrails, administrative rights, machine learning operations (MLOps) and APIs; a simple guardrail sketch follows this list.
- What will your total cost of ownership be (including the costs of licensing, compute and infrastructure, cloud, sustainability, etc.)?
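On the guardrails point above, a first implementation can be very simple. The sketch below is hypothetical: the blocked patterns, length budget and call_llm placeholder are illustrative assumptions rather than any vendor's API, but it shows the kind of pre-flight policy check IT can wrap around every generative AI request.

```python
# Hypothetical sketch of a policy guardrail wrapped around generative AI calls.
# The blocked patterns, length budget and call_llm() stub are illustrative assumptions,
# not a reference to any specific vendor's API.
import re

BLOCKED_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",       # e.g., US Social Security number format
    r"(?i)customer[_ ]?record",     # internal data classifications you define
]
MAX_PROMPT_WORDS = 2000             # rough budget; real token counts differ


def violates_policy(prompt: str) -> str | None:
    """Return the reason a prompt is blocked, or None if it passes."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt):
            return f"prompt matches restricted pattern: {pattern}"
    if len(prompt.split()) > MAX_PROMPT_WORDS:
        return "prompt exceeds the allowed length budget"
    return None


def call_llm(prompt: str) -> str:
    # Placeholder: wire this to whichever generative AI service you approve.
    raise NotImplementedError("Connect to your approved generative AI service.")


def guarded_completion(prompt: str) -> str:
    """Apply governance checks before any request leaves the organization."""
    reason = violates_policy(prompt)
    if reason is not None:
        raise PermissionError(f"Blocked by AI usage policy: {reason}")
    return call_llm(prompt)
```

Centralizing requests behind a wrapper like this also gives IT a single place to add logging, cost tracking and model swaps later.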
Meanwhile, you need to be actively engaged with a partner or your technology vendor about suitability for purpose and the risks for your organization.
The reality is there will be a version of generative AI that is enterprise worthy, and you'll want to be prepared to act quickly.
Up next:
- Generative AI: Risks, Rewards and a Framework for Utilization
- Sound Data Strategy Paramount to Generative AI
- Common Pitfalls When Getting Started With Data Governance
List of key generative AI terms
For those unfamiliar with generative AI, here's a glossary of terms that should help you gain a clearer understanding of what all the hype is about:
- LLM or large language model: A type of AI algorithm that leverages deep learning techniques to process natural language in order to understand, summarize, predict and generate content. These models have anywhere from millions to hundreds of billions of parameters.
- GPT or generative pre-trained transformer: A type of LLM trained on a large corpus using the transformer neural network to generate text as a response to input.
- NLP or natural language processing: The processing of human language by a machine including parsing, understanding, generating, etc.
- Corpus: Essentially, the training data. A collection of machine-readable text structured as a dataset.
- Vector: The numerical representation of a word or phrase: a list of numbers, each capturing a different aspect of its meaning.
- Token: A unit of input text; the smallest semantic unit defined in a document or corpus (not necessarily a word). ChatGPT, for example, has a roughly 4,000-token limit, while GPT-4 permits up to 32,000 tokens (see the tokenization sketch after this glossary).
- Parameters: The weights, or variables, a model learns during training. For example, GPT-3, the model family on which ChatGPT was built, has roughly 175 billion parameters.
- Transformer: The neural network architecture behind LLMs. A deep learning model built on the attention mechanism, which learns how much weight, or significance, to give each part of the input data.
- RL or reinforcement learning: A feedback-based machine learning paradigm where the model/agent learns to act in an environment to maximize a defined reward.
- RLHF or reinforcement learning from human feedback: A technique that trains a reward model directly from human feedback and uses the model as a reward function to optimize an agent's policy using RL.
- Inference: Using the trained model. Feeding new data to the model to get its response or prediction.
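To ground the token definitions above, here is a small sketch using OpenAI's open-source tiktoken tokenizer. The encoding name and the 4,000-token budget are example values chosen for illustration; check the documented limits of whichever model you actually deploy.

```python
# Illustration of tokens and token limits using OpenAI's open-source tiktoken library.
# The encoding name and the 4,000-token budget are example values, not guarantees
# about any particular hosted model.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "Generative AI should be treated like a highly skilled intern."
token_ids = encoding.encode(text)  # each id maps to a token, not necessarily a whole word

print(f"Token count: {len(token_ids)}")
print("Tokens:", [encoding.decode([tid]) for tid in token_ids])

# Simple pre-flight check against an assumed context window.
CONTEXT_WINDOW = 4000  # example figure; verify the limit of the model you deploy
if len(token_ids) > CONTEXT_WINDOW:
    print("Prompt would need to be truncated or summarized first.")
```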
This report may not be copied, reproduced, distributed, republished, downloaded, displayed, posted or transmitted in any form or by any means, including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise, without the prior express written permission of WWT Research. It consists of the opinions of WWT Research and as such should not be construed as statements of fact. WWT provides the Report "AS-IS", although the information contained in the Report has been obtained from sources that are believed to be reliable. WWT disclaims all warranties as to the accuracy, completeness or adequacy of the information.