With all the excitement about artificial intelligence (AI) in our newsfeeds, one could wonder if AI is about to transform business, take your job, become your medical advisor, replace your personal trainer, or serve as your full-time butler and autonomous driver.

Yes.

And no.

Maybe.

And not yet.

A recurring part of my role in helping CIOs develop data strategies is separating fact from fiction about the power of AI and recommending the approach that best fits each organization's strategy. With the success of Facebook, Amazon, Netflix, Google and other high-growth companies attributed in part to their application of machine learning to vast flows of data, many organizations are starting to adopt AI for their business.

Back to school

So, to help me understand AI in the context of organizational strategy and technical capabilities, I went back to school. I needed to learn how the technology actually works and what it can, and cannot, do for a business.

I spent the last year in several educational programs designed to train AI data scientists, application developers, CEOs, CDOs and CIOs.

You can take the same courses I took.

I was helped by the Machine Learning and Deep Learning courses available on Coursera. These are among the most popular courses on the platform, reflecting the surge of interest in data science and AI, and they are based on curricula developed by Andrew Ng, considered one of the founding fathers of deep learning.

I also completed a specialized post-grad Executive Education program in AI Business Strategy offered by MIT Sloan and the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Separately, I gained certification in Deep Learning via training sponsored by Nvidia, a major supplier of the GPU technology that accelerates the heavy computation machine learning often requires.

With this foundation, I was able to use Deep Learning to build and train my own computer vision model that recognized humans versus other objects and creatures, drawing on an open source image library and existing frameworks for the training.
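
To give a sense of what that kind of exercise involves, here is a minimal sketch of the same idea using transfer learning in TensorFlow/Keras. The library choice, the "data/" folder of labeled images and the training settings are illustrative assumptions on my part, not the exact tools I used.

    import tensorflow as tf

    # Load labeled images from a (hypothetical) folder; each subfolder is one
    # class, e.g. data/human and data/other.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/", image_size=(224, 224), batch_size=32)

    # Start from a network pre-trained on ImageNet and freeze its learned weights.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False

    # Add a small classification head for the "human vs. other" decision.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)  # trained on examples rather than programmed with rules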

Lessons learned

Sometimes older dogs can learn new tricks. Here is some of what I learned about AI:

Intelligence and history

  • AI mimics human intelligence; it does not replace it.
  • Alexa and Google Duplex may pass the Turing test but cannot reason or develop contextual understanding. Yet.
  • Conceptually, AI has been around since the 1950s. Early attempts at AI were limited by insufficient data, a shortage of computational power, and siloed, proprietary algorithm development.

Machine Learning

  • Machine learning models are not programmed like conventional algorithms; they are trained using data, usually in a supervised or semi-supervised fashion.
  • Machine learning is a mathematical technique for training algorithmic models. First, a human (or a previously trained model) labels a set of training data. The computer ("machine") then "learns" by making a series of predictions about new data, comparing those predictions against known outcomes, and adjusting itself to reduce its errors. Through this cycle, a machine learning model continually refines its learning to better predict the next image, word or transaction (a minimal sketch follows this list).
  • Examples of machine learning include Facebook's image recognition, many chatbots, and the recommendation engines behind Amazon and Netflix.
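
As a concrete, if simplified, illustration of "training rather than programming," the sketch below fits a supervised model with scikit-learn. The library and the handwritten-digits dataset are my own illustrative choices, not something prescribed by any of the courses above.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Labeled examples: images of handwritten digits plus the digit each one shows.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # No hand-written rules: the model is fitted ("trained") to the labeled data.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # The trained model then makes statistical predictions about data it has never seen.
    predictions = model.predict(X_test)
    print("accuracy on new data:", accuracy_score(y_test, predictions))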

Deep Learning

  • Deep learning is a technique for training algorithmic models, typically built on neural networks that run computations across massive amounts of data. A deep learning model develops its own internal logic to make predictions about new data it encounters. Deep learning models largely train themselves (unsupervised or semi-supervised) and are refined as their predictions are tested against new data (see the sketch after this list).
  • Training deep learning models is computationally intensive and comes with an insatiable hunger for data. Examples of deep learning at work include the image processing of MRIs, facial recognition, and drone video image recognition.
  • Neural networks can be difficult to interpret. A deep learning model can recognize a human face but cannot describe a particular face or explain "why" the image is human.
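
For readers who want to see what "developing its own internal logic" looks like in practice, here is a minimal sketch of a small neural network trained on the MNIST handwritten-digit images with TensorFlow/Keras. The library, dataset and layer sizes are illustrative assumptions, not a production recipe.

    import tensorflow as tf

    # MNIST: 60,000 labeled images of handwritten digits, a classic training set.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

    # Layers of learned weights: the network builds its own internal representation.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(28, 28)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3)

    # Inference: the trained network labels images it has never seen, yet its
    # roughly 100,000 learned weights offer no human-readable explanation of "why".
    model.evaluate(x_test, y_test)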

[Image: Some functional applications of machine learning]

Limitations

  • Human intelligence is characterized by contextual reasoning that can often be applied to new situations to quickly form situational understanding. In contrast, AI is narrow and specific. To date, there is no general or "strong AI." So, with apologies to Ray Kurzweil, we are generations away from the singularity.
  • AI is limited to the specific data set and context for each algorithmic model. AI does not reason; instead, AI makes statistical predictions about the next face, the range of appropriate words and sentence construction to offer next, the next transaction, or the next move to make.
  • The best AI model for chess can beat the best grandmasters in chess, but the same triumphant chess model cannot play a game of Go. Chess engines rely on search optimization to choose moves; top Go models combine deep pattern recognition of board positions (closer to image recognition) with search.
  • A machine learning model can be trained on several specific skills, but it can't autonomously combine those separate skills to create a new skill.

Ethical Implications

Ethically, the rise of AI creates challenges to consider related to labor, model interpretability, model bias and autonomy. For example:

  1. Jobs: What happens to workers in highly repetitive jobs that may initially be augmented, then later fully performed by AI-trained applications?
  2. Interpretability: If my doctor doesn't know exactly how my AI-generated medical recommendation was derived from a neural network, can my doctor rely on it to treat me? Who gets to interpret the model so that we can trust the recommendation?
  3. Bias: Why are bot assistants (Siri, Alexa, etc.) programmed to feature a female voice? Machine learning-derived employment apps have been found to display gender and racial bias. Is the bias tied to the source data or to the algorithm? Did a job applicant get turned down by a program? Would Einstein have been turned away from an applied mathematics position?
  4. Autonomy: Do the human and the machine operate in series or in parallel? When does my self-driving car take over from me? When I own a self-driving car, who bears liability for accidents, me or the software?

Just the Beginning

Finally, I learned that we're only at the beginning of the machine learning age. Data about you and your behavior is flowing to organizations every time you change channels on your TV, settle down with your interactive game of choice, watch or listen to streaming content, adjust your thermostat, regenerate solar power back into the grid, cross a border, walk through a train station or airport, get detected by a retailer's wi-fi network, use your navigation app, click through an ad on Instagram, or ask Alexa to find something for you.

The future is now

Although you may not have an interest in machine learning, machine learning most certainly has an interest in you. AI is no longer just a "data science thing." To rephrase William Gibson, the AI future is already here; it's just not evenly distributed.

C-level executives, technologists and analysts should know enough about AI to understand the uses, advantages, infrastructure and constraints of machine learning. Invest the time to understand machine learning so that you can invest your resources wisely.

Courses on machine learning are readily available, and getting started is easy. You can even start your education for free by visiting Google's homepage: its machine learning-powered search algorithm will finish your search string before you finish typing what you're thinking.

In Part II, I discuss the business strategy implications of machine learning and AI initiatives.