What is Artificial Intelligence?
Artificial intelligence (AI) is a branch of computer science that deals with the goal of creating intelligent machines and systems capable of decision-making, reasoning, learning and acting on their own.
Rather than operating as a standalone technology, AI relies on an array of high-performance architectures to function. This technology stack includes hardware and software components spanning automation and orchestration, high-performance computing, high-performance storage, and high-performance networking.
This purpose-built infrastructure supports the most common AI-driven technologies in use today, such as machine learning (ML) and deep learning (DL), natural language processing (NLP), large language models (LLMs) and generative AI, computer vision, evolutionary computation, robotics and robotic process automation (RPA), speech and pattern recognition, cognitive computing, expert systems, augmented reality, biometrics, facial recognition and more.
A mature and coherent data strategy is likewise important, as the success of any AI/ML program or product ultimately comes down to a question of data quality.
3 notional stages of AI maturity
We can categorize AI maturity into three stages based on current capabilities and theoretical future advancements. If you think of AI as a mountain to be scaled, we've successfully reached basecamp in the first stage of our climb. As we gaze up to the peak — assuming there is one! — a navigable path to the future stages of AI maturity remains very much clouded. Our ability to understand and overcome the engineering and philosophical challenges that litter this rocky terrain continues to be a topic of debate in academic circles and AI research and development (R&D) teams.
The three stages of AI maturity are:
- Artificial Narrow Intelligence (ANI): Also known as "weak" AI, this stage of maturity encompasses all AI models created to date. The learning algorithms used in these systems are designed to autonomously perform specific functions and cannot handle more than one narrowly defined task without human intervention. When we talk about AI applications in business, we're always talking about ANI.
- Artificial General Intelligence (AGI): Yet to be achieved, this theoretical stage of AI maturity would involve the ability to closely mimic the complexity of human thought. An example would be a machine that can fluidly learn, adapt, perceive and understand the world around it in the same versatile, multifaceted ways we humans do.
- Artificial Super Intelligence (ASI): This hypothetical peak of maturity would be reached if and when an AI develops self-aware cognitive and thinking abilities, and its intelligence (including the ability to think in abstractions) surpasses that of the smartest humans across a broad range of subjects and capabilities.
Organizations across industries are applying ANI to a wide range of problems, including game playing, medical diagnosis, speech recognition, content generation, visual indexing and generation, translation and much more.
Recent advances in large language models (LLMs) and generative AI (GenAI) have moved ANI closer than ever to mimicking human intelligence and complex decision-making.
With all the excitement generated by LLMs and GenAI, you may be asking where these AI advancements came from.
A brief history of large language models (LLMs)
Rome wasn't built in a day, nor were popular LLMs like ChatGPT. In fact, the NLP-based foundations for modern LLMs have been incrementally laid over the past 75 years. We can divide that journey into four eras of NLP progress:
The first era, spanning more than 50 years, includes both rule-based and statistics-based approaches to NLP. In a rule-based approach, a set of predefined linguistic rules or patterns is applied to analyze, process and extract information from a textual dataset. Statistical language models, which take a machine learning approach to NLP, came into vogue around 1990. They learn the probability of a word's occurrence by analyzing large amounts of textual data. These early models were state-of-the-art given the limited compute capabilities of the time, but they had an obvious limitation: they could not capture semantic meaning.
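To make the statistical approach concrete, here is a minimal sketch of a bigram language model in Python. The toy corpus and function names are illustrative assumptions, not any particular historical system.

```python
from collections import Counter, defaultdict

# Toy corpus (an illustrative assumption; real models used far larger text).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word (bigram counts).
bigram_counts = defaultdict(Counter)
for prev, curr in zip(corpus, corpus[1:]):
    bigram_counts[prev][curr] += 1

def next_word_probability(prev, curr):
    """Estimate P(curr | prev) from raw bigram counts."""
    total = sum(bigram_counts[prev].values())
    return bigram_counts[prev][curr] / total if total else 0.0

# Probabilities come purely from co-occurrence counts, with no grasp of meaning.
print(next_word_probability("cat", "sat"))  # 1.0 ("cat" is only ever followed by "sat")
print(next_word_probability("the", "cat"))  # 0.25 (cat, mat, dog, rug each follow "the")
```

Because the model only counts co-occurrences, two words that mean the same thing share nothing in this representation, which is the semantic blind spot described above.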
As Moore's law played out and compute power continued to grow at exponential rates, the new millennium ushered in the era of neural networks. Built to mimic the intricate networking of the human brain, these models could capture the semantics of a dataset while predicting the next word in a sequence. Still, neural networks were limited in their ability to make associations and use long-term memory.
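As a rough illustration (not a historical model), the sketch below trains a tiny next-word predictor with PyTorch; the corpus, embedding size and training budget are toy assumptions chosen only to show the mechanics of learned word representations.

```python
import torch
import torch.nn as nn

# Same toy-corpus idea as above (an illustrative assumption).
tokens = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}

# Training pairs: predict the next word from the current one.
x = torch.tensor([idx[w] for w in tokens[:-1]])
y = torch.tensor([idx[w] for w in tokens[1:]])

# Embedding + linear layer: words become dense vectors whose geometry can
# capture similarity, unlike count-based statistical models.
model = nn.Sequential(nn.Embedding(len(vocab), 8), nn.Linear(8, len(vocab)))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # a few gradient steps on the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# After training, the most likely word following "cat" should be "sat".
logits = model(torch.tensor([idx["cat"]]))
print(vocab[logits.argmax(dim=-1).item()])  # expected: "sat"
```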
In 2017, researchers at Google publicly released a paper titled "Attention Is All You Need." This paper introduced the breakthrough concept of the "transformer," a neural network architecture that pays attention to all the words in a sentence simultaneously (instead of processing each word on its own like older models). While transformers helped overcome prior limitations around associations and long-term memory, their most important benefit was computational: they are far cheaper to train than preceding designs that relied on recurrent neural network (RNN) or long short-term memory (LSTM) components, which has enabled data scientists and engineers to build much deeper networks that perform better. The main limitation of transformer-based neural networks is the massive amount of data required for model training.
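To make "attention" concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core transformer operation; the random matrices stand in for learned projection weights, and the dimensions are toy assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product attention over a whole sequence at once."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # every word scored against every word
    weights = softmax(scores, axis=-1)        # attention weights, each row sums to 1
    return weights @ V                        # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                      # 5 "words", toy dimensions
X = rng.normal(size=(seq_len, d_model))       # stand-in word embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 16): one updated vector per word, computed in parallel
```

Because every word's scores against every other word come from a single matrix product, the whole sequence is processed in parallel rather than step by step, which is the source of the efficiency gain noted above.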
Finally, Google's transformer revolution inspired the development of pre-trained LLMs, which stack a series of transformers trained on enormous datasets scraped from the internet. Generative AI products like ChatGPT, Midjourney and Stable Diffusion soon followed, leveraging these predictive abilities to generate content and imagery from user input in a convincingly (and sometimes eerily) human-like fashion.
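As one small example of how accessible pre-trained transformers have become, the sketch below uses the open-source Hugging Face transformers library to generate text with GPT-2, a small publicly available pre-trained model; the prompt and generation settings are illustrative.

```python
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, openly available pre-trained transformer language model.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting the next token.
result = generator("Artificial intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])
```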
Benefits of using AI in business
AI can offer tremendous value and benefits to organizations looking to improve business processes and operations, including:
- Improved productivity: AI-powered automation technologies can handle tedious and repetitive tasks, freeing up human capital for more strategic and high-value activities.
- Reduced errors and delays: AI can help minimize costly mistakes and eliminate bottlenecks, improving operational efficiency.
- Accelerated business processes: AI can shrink the product development lifecycle and speed time-to-market, giving businesses a competitive edge.
- Improved customer experience: AI-powered chatbots and other customer service tools can provide 24/7 support to end-users, improving their satisfaction and loyalty.
- Data-driven decision-making: AI can help businesses make more intelligent decisions by providing insights from large datasets.
- Enhanced compliance and governance: AI can help businesses better comply with regulations and enforce data governance policies.
Applications of AI in business
AI-driven technologies can be found in many business applications today, such as:
- Customer service: AI-powered chatbots can answer customer questions, resolve issues and provide support 24/7. This can improve customer satisfaction and reduce the cost of customer support.
- Marketing: AI can be used to target ads more effectively, personalize customer experiences, and generate content. This can help improve the effectiveness of advertising campaigns and increase return on investment.
- Supply chain management: AI can be used to forecast demand, optimize inventory levels, and manage logistics. This can help reduce costs and improve customer service.
- Cybersecurity and data protection: AI can be used to detect and prevent fraudulent transactions. This can help protect businesses from data breaches and other security threats.
- Risk management: AI can be used to assess risk more accurately, helping businesses make better-informed decisions.
Risks and other considerations for AI adoption
Key risks and limitations of modern LLM and GenAI models include:
- Bias: Can we ensure our models don't generate outputs that disproportionately impact certain individuals or groups?
- Reliability: Can we trust outputs to be factually accurate?
- Control: Can we prevent outputs that are harmful, toxic or obscene?
- Explainability/interpretability: Can we explain why a model generated a certain output?
- Cost: Can we afford to build, train and customize LLMs for enterprise use?
- Copyright: What's the legality of training models with protected content?
- Privacy: How is user input data used or shared?
- Security: How easy is it for bad actors to access and bypass model security features?
- Education: Do operators understand current model limitations, including the need to validate data quality on the way in and double-check outputs for accuracy?
Organizations must be aware of these risks and take steps to mitigate them. Other key challenges to consider include:
- Data requirements: AI models require massive amounts of data to train, and that data can be expensive to collect and to prepare to the required level of quality.
- Technical expertise: Implementing the right AI model requires technical expertise. Businesses need to have the right people and resources in place to implement and maintain AI systems if they hope to achieve their business goals.
- Established best practices: Because AI is a relatively new field, there are few documented processes and standardized methods for AI implementation. Many data scientists and programmers must build a practice from the ground up within their companies, overcoming both technological and organizational challenges along the way (see AIOps).
- Responsible AI and ethics: AI systems can make decisions that significantly impact people's lives. Businesses need to be aware of the ethical implications of using AI and take steps to mitigate any potential risks. They must also ensure data integrity and build trust in data throughout the organization so that stakeholders will use insights derived from AI-powered technologies to inform decisions.
Working with a knowledgeable partner who has extensive experience applying AI to various businesses and industries can help you shorten the learning curve and optimize your resources. Get in touch to learn more about AI, its modern business applications, and how WWT can help you maximize your investment in AI technologies.