Introduction to Machine Learning
01: Telling the Computer What We Want
Professor Littman gives a bird’s-eye view of machine learning, covering its history, key concepts, terms, and techniques as a preview for the rest of the course. Look at a simple example involving medical diagnosis. Then focus on a machine-learning program for a video green screen, used widely in television and film. Contrast this with a traditional program to solve the same problem.
02: Starting with Python Notebooks and Colab
The demonstrations in this course use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run programming examples from your web browser, which avoids the need to install the software on your own computer, saving installation headaches and giving you more processing power than is available on a typical home computer.
03: Decision Trees for Logical Rules
Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I, as in “diet” and “weigh”? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree for predicting the likelihood that an individual will develop diabetes based on eight health factors.
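To make the idea concrete, here is a minimal sketch of the question-picking step at the heart of a decision tree: a one-question "stump" that tries every feature and threshold and keeps the split with the lowest Gini impurity. The glucose and BMI numbers below are invented for illustration, not the course's eight-factor dataset.

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels (0 = perfectly pure group)."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, labels):
    """Try every (feature, threshold) question; keep the one whose two
    resulting groups have the lowest weighted impurity."""
    best = (None, None, float("inf"))
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best[2]:
                best = (f, t, score)
    return best

# Toy health records: [glucose, bmi] -> 1 = diabetic, 0 = not (invented).
X = [[85, 22], [90, 28], [150, 30], [160, 35], [100, 24], [145, 33]]
y = [0, 0, 1, 1, 0, 1]
feature, threshold, impurity = best_split(X, y)
```

A full decision tree simply repeats this step recursively on each resulting group.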
04: Neural Networks for Perceptual Rules
Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.
05: Opening the Black Box of a Neural Network
Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green screen problem from the first lecture to build a learning algorithm that places the professor against a new backdrop.
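In the same spirit as the lecture's walkthrough, here is a sketch of the smallest possible neural network: a single neuron trained by gradient descent to flag "green-screen" pixels (high green, low red and blue). The pixel values and labels are invented for illustration.

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-z))

# Training pixels as (r, g, b) in 0..1; label 1 = green-screen pixel.
data = [((0.1, 0.9, 0.1), 1), ((0.2, 0.8, 0.2), 1),
        ((0.9, 0.2, 0.1), 0), ((0.5, 0.5, 0.9), 0),
        ((0.1, 0.7, 0.3), 1), ((0.8, 0.8, 0.8), 0)]

w, b, lr = [0.0, 0.0, 0.0], 0.0, 1.0
for _ in range(2000):                 # many small corrective steps
    for x, y in data:
        pred = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = pred - y                # gradient of the log-loss
        for i in range(3):
            w[i] -= lr * err * x[i]
        b -= lr * err

def is_green(pixel):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, pixel)) + b) > 0.5
```

A real network stacks many such neurons in layers; the training loop (predict, measure error, nudge the weights) is the same idea.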
06: Bayesian Models for Probability Prediction
A program need not understand the content of an email to know with high probability that it’s spam. Discover how machine learning does so with the Naïve Bayes approach, which applies Bayes’ theorem to a simplified model of language generation. The technique illustrates a very useful strategy: reasoning backwards from effects (in this case, words) to their causes (spam).
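A minimal Naïve Bayes sketch, using a handful of invented "emails": estimate how often each word appears in each class, then label a new message with whichever class makes its words most probable.

```python
import math

spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting schedule today", "project update today", "lunch meeting plan"]

def word_counts(docs):
    counts = {}
    for d in docs:
        for w in d.split():
            counts[w] = counts.get(w, 0) + 1
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(message, counts, n_docs):
    """Log of prior * product of per-word probabilities for one class."""
    total = sum(counts.values())
    lp = math.log(n_docs / (len(spam) + len(ham)))       # class prior
    for w in message.split():
        # Laplace smoothing: unseen words don't zero out the product.
        lp += math.log((counts.get(w, 0) + 1) / (total + len(vocab)))
    return lp

def classify(message):
    s = log_prob(message, spam_counts, len(spam))
    h = log_prob(message, ham_counts, len(ham))
    return "spam" if s > h else "ham"
```

The "naïve" part is the assumption that words occur independently given the class, which is false for real language but works surprisingly well.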
07: Genetic Algorithms for Evolved Rules
When you encounter a new type of problem and don’t yet know the best machine learning strategy to solve it, a ready first approach is a genetic algorithm. These programs apply the principles of evolution to artificial intelligence, employing natural selection over many generations to optimize your results. Analyze several examples, including finding where to aim.
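As a sketch of the evolutionary loop (selection, crossover, mutation), here is a genetic algorithm on the classic "OneMax" toy problem, evolving bit strings toward all ones. This stands in for the course's aiming example, which is not reproduced here.

```python
import random

random.seed(42)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)                        # count of 1s: higher is fitter

def crossover(a, b):
    cut = random.randrange(1, LENGTH)       # splice two parents together
    return a[:cut] + b[cut:]

def mutate(bits):
    return [bit ^ 1 if random.random() < 0.02 else bit for bit in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]              # natural selection: fittest half lives on
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POP - len(survivors))]
    pop = survivors + children

best = max(pop, key=fitness)
```

Because the fittest individuals always survive unchanged, the best fitness can only improve from one generation to the next.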
08: Nearest Neighbors for Using Similarity
Simple to use and speedy to execute, the nearest neighbor algorithm works on the principle that adjacent elements in a dataset are likely to share similar characteristics. Try out this strategy for determining a comfortable combination of temperature and humidity in a house. Then dive into the problem of malware detection, seeing how the nearest neighbor rule can sort good software from bad.
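A minimal sketch of the nearest-neighbor rule on the comfort example: label a new (temperature, humidity) reading by copying the label of the closest known example. The readings below are invented.

```python
import math

# Known readings: ((temperature C, humidity %), label). Invented values.
examples = [((21, 45), "comfortable"), ((22, 50), "comfortable"),
            ((30, 80), "uncomfortable"), ((15, 30), "uncomfortable"),
            ((23, 55), "comfortable"), ((32, 70), "uncomfortable")]

def nearest_neighbor(point):
    """Return the label of the training example closest to `point`."""
    _, label = min(examples, key=lambda e: math.dist(e[0], point))
    return label
```

One practical caveat: because humidity here spans a wider numeric range than temperature, it dominates the distance; real applications usually rescale features first.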
09: The Fundamental Pitfall of Overfitting
Having covered the five fundamental classes of machine learning in the previous lessons, now focus on a risk common to all: overfitting. This is the tendency to model training data too well, which can harm the performance on the test data. Practice avoiding this problem using the diabetes dataset from lecture 3. Hear tips on telling the difference between real signals and spurious associations.
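Overfitting in its purest form can be sketched with a "model" that simply memorizes its training data. On labels that are pure coin flips (no real signal at all), it scores perfectly on the data it saw and no better than chance on fresh data.

```python
import random

random.seed(1)

# Labels are random coin flips: there is NO genuine pattern to learn.
train = [(i, random.randint(0, 1)) for i in range(100)]
test = [(i + 100, random.randint(0, 1)) for i in range(100)]

memory = dict(train)

def memorizer(x):
    return memory.get(x, 0)   # perfect recall on train, blind guess elsewhere

train_acc = sum(memorizer(x) == y for x, y in train) / len(train)
test_acc = sum(memorizer(x) == y for x, y in test) / len(test)
```

The gap between the two accuracies is exactly why performance must always be measured on held-out data the model never saw during training.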
10: Pitfalls in Applying Machine Learning
Explore pitfalls that loom when applying machine learning algorithms to real-life problems. For example, see how survival statistics from a boating disaster can easily lead to false conclusions. Also, look at cases from medical care and law enforcement that reveal hidden biases in the way data is interpreted. Since an algorithm is doing the interpreting, understanding what is happening can be a challenge.
11: Clustering and Semi-Supervised Learning
See how a combination of labeled and unlabeled examples can be exploited in machine learning, specifically by using clustering to learn about the data before making use of the labeled examples.
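A sketch of the clustering step, using k-means on one-dimensional points: alternately assign each point to its nearest center, then move each center to the mean of its group. Once the groups are found, a single labeled example per cluster can name the whole group.

```python
# Unlabeled 1-D points that visibly form two groups (invented values).
points = [1.0, 1.5, 1.2, 8.0, 8.5, 7.9, 1.1, 8.2]

def kmeans(points, k=2, iterations=10):
    centers = [min(points), max(points)]         # simple initialization
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) for c in clusters]
    return centers, clusters

centers, clusters = kmeans(points)
```

The same two-step loop works in any number of dimensions with Euclidean distance in place of `abs`.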
12: Recommendations with Three Types of Learning
Recommender systems are ubiquitous, from book and movie tips to work aids for professionals. But how do they function? Look at three different approaches to this problem, focusing on Professor Littman’s dilemma as an expert reviewer for conference paper submissions, numbering in the thousands. Also, probe Netflix’s celebrated one-million-dollar prize for an improved recommender algorithm.
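One of the standard approaches, user-based collaborative filtering, can be sketched in a few lines: find users whose ratings resemble yours, then recommend what they liked that you haven't seen. All the ratings below are invented; this is not the Netflix-prize algorithm.

```python
import math

ratings = {
    "ann":   {"matrix": 5, "titanic": 1, "inception": 4},
    "bob":   {"matrix": 4, "titanic": 2, "inception": 5, "up": 4},
    "carol": {"matrix": 1, "titanic": 5, "up": 2},
}

def similarity(u, v):
    """Cosine similarity of two users over the items both have rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    nu = math.sqrt(sum(ratings[u][i] ** 2 for i in shared))
    nv = math.sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (nu * nv)

def recommend(user):
    """Score each unseen item by similarity-weighted ratings from others."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0) + sim * r
    return max(scores, key=scores.get)
```

Ann's ratings track Bob's, not Carol's, so Bob's enthusiasm carries the most weight in what gets recommended to her.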
13: Games with Reinforcement Learning
In 1959, computer pioneer Arthur Samuel popularized the term “machine learning” for his checkers-playing program. Delve into strategies for the board game Othello as you investigate today’s sophisticated algorithms for improving play—at least for the machine. Also explore game-playing tactics for chess, Jeopardy!, poker, and Go, which have been a hotbed for machine-learning research.
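The core idea of learning from wins and losses can be sketched with tabular Q-learning on a toy corridor game rather than a full board game: the agent only sees a reward on reaching the goal, yet the learned values propagate backward until every state knows which way to move.

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, [-1, +1]          # states 0..4; move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # step size, discount, exploration

for _ in range(500):                      # play many episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES - 1)]
```

Game programs for Othello or Go layer search and function approximation on top, but this reward-driven update is the common foundation.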
14: Deep Learning for Computer Vision
Discover how the ImageNet challenge helped revive the field of neural networks through a technique called deep learning, which is ideal for tasks such as computer vision. Consider the problem of image recognition and the steps deep learning takes to solve it. Dr. Littman throws out his own challenge: Train a computer to distinguish foot files from cheese graters.
15: Getting a Deep Learner Back on Track
Roll up your sleeves and debug a deep-learning program. The software is a neural net classifier designed to separate pictures of animals and bugs. In this case, fix the bugs in the code to find the bugs in the images! Professor Littman walks you through diagnostic steps relating to the representational space, the loss function, and the optimizer. It’s an amazing feeling when you finally get the program working well.
16: Text Categorization with Words as Vectors
Previously, you saw how machine learning is used in spam filtering. Dig deeper into problems of language processing, such as how a computer guesses the word you are typing, even when you badly misspell it. Focus on the concept of word embeddings, which “define” the meanings of words using vectors in high-dimensional space, a method that draws on techniques from linear algebra.
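The vector idea can be sketched with tiny hand-made embeddings: similar words point in similar directions, measured by the cosine of the angle between their vectors. Real embeddings have hundreds of dimensions learned from text; these 3-d values are invented.

```python
import math

# Toy 3-d "embeddings" (invented): royalty words cluster, fruit words cluster.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
    "pear":  [0.2, 0.1, 0.8],
}

def cosine(u, v):
    """Cosine similarity: 1 = same direction, 0 = unrelated directions."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(word):
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(vectors[word], vectors[w]))
```

The payoff is that geometry stands in for meaning: questions about words become linear-algebra operations on their vectors.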
17: Deep Networks That Output Language
Continue your study of machine learning and language by seeing how computers not only read text, but how they can also generate it. Explore the current state of machine translation, which rivals the skill of human translators. Also, learn how algorithms handle a game that Professor Littman played with his family, where a given phrase is expanded piecemeal to create a story. The results can be quite poetic!
18: Making Stylistic Images with Deep Networks
One way to think about the creative process is as a two-stage operation, involving an idea generator and a discriminator. Study two approaches to image generation using machine learning. In the first, a target image of a pig serves as the discriminator. In the second, the discriminator is programmed to recognize the general characteristics of a pig, which is more how people recognize objects.
19: Making Photorealistic Images with GANs
A new approach to image generation and discrimination pits both processes against each other in a “generative adversarial network,” or GAN. The technique can produce a new image based on a reference class, for example making a person look older or younger, or automatically filling in a landscape after a building has been removed. GANs have great potential for creativity and, unfortunately, fraud.
20: Deep Learning for Speech Recognition
Consider the problem of speech recognition and the quest, starting in the 1950s, to program computers for this task. Then delve into the machine-learning algorithms behind today’s sophisticated speech recognition systems. Get a taste of the technology by training deep-learning software to recognize simple words. Finally, look ahead to the prospect of conversing computers.
21: Inverse Reinforcement Learning from People
Are you no good at programming? Machine learning can take a demonstration, predict what you want, and suggest improvements. For example, inverse reinforcement learning turns the tables on the logical relation “if you are a horse and like carrots, go to the carrot,” reading it the other way around: “if you see a horse go to the carrot, it might be because the horse likes carrots.”
22: Causal Inference Comes to Machine Learning
Get acquainted with a powerful new tool in machine learning, causal inference, which addresses a key limitation of classical methods—the focus on correlation to the exclusion of causation. Practice with a historic problem of causation: the link between cigarette smoking and cancer, which will always be obscured by confounding factors. Also look at other cases of correlation versus causation.
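Why correlation alone misleads can be sketched with Simpson's paradox, using invented hospital-style numbers rather than the smoking data: within every severity group the treatment helps, yet the naive overall comparison says it hurts, because severity confounds who gets treated.

```python
# (severity, group) -> (recovered, total). All numbers invented.
counts = {
    ("mild", "treated"):     (18, 20),   # 90% recover
    ("mild", "untreated"):   (64, 80),   # 80% recover
    ("severe", "treated"):   (40, 80),   # 50% recover
    ("severe", "untreated"): (8, 20),    # 40% recover
}

def rate(group, severity=None):
    """Recovery rate for a group, optionally within one severity stratum."""
    rec = tot = 0
    for (sev, g), (r, t) in counts.items():
        if g == group and (severity is None or sev == severity):
            rec += r
            tot += t
    return rec / tot

# Naive comparison pools the strata and points the wrong way...
naive_gap = rate("treated") - rate("untreated")
# ...while adjusting for severity reveals the treatment helps in every stratum.
adjusted_gaps = [rate("treated", s) - rate("untreated", s)
                 for s in ("mild", "severe")]
```

The reversal happens because severe cases are both likelier to be treated and likelier to fare badly; causal inference is the discipline of deciding which variables, like severity here, must be adjusted for.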
23: The Unexpected Power of Over-Parameterization
Probe the deep-learning revolution that took place around 2015, conquering worries about overfitting data due to the use of too many parameters. Dr. Littman sets the stage by taking you back to his undergraduate psychology class, taught by one of The Great Courses’ original professors. Chart the breakthrough that paved the way for deep networks that can tackle hard, real-world learning problems.
24: Protecting Privacy within Machine Learning
Machine learning is both a cause and a cure for privacy concerns. Hear about two notorious cases where de-identified data was unmasked. Then, step into the role of a computer security analyst, evaluating different threats, including pattern recognition and compromised medical records. Discover how to think like a digital snoop and evaluate different strategies for thwarting an attack.
25: Mastering the Machine Learning Process
Finish the course with a lightning tour of meta-learning—algorithms that learn how to learn, making it possible to solve problems that are otherwise unmanageable. Examine two approaches: one that reasons about discrete problems using satisfiability solvers and another that allows programmers to optimize continuous models. Close with a glimpse of the future for this astounding field.