Course description
Statistical Learning
This is an introductory-level course in supervised learning, with a focus on regression and classification methods. The syllabus includes: linear and polynomial regression, logistic regression and linear discriminant analysis; cross-validation and the bootstrap, model selection and regularization methods (ridge and lasso); nonlinear models, splines and generalized additive models; tree-based methods, random forests and boosting; support-vector machines; neural networks and deep learning; survival models; multiple testing. Some unsupervised learning methods are discussed: principal components and clustering (k-means and hierarchical).
This is not a math-heavy class, so we try to describe the methods without heavy reliance on formulas or complex mathematics. We focus on what we consider the important elements of modern data science. Computing is done in R. Some lectures are devoted to R, beginning with ground-up tutorials and progressing to more detailed sessions that implement the techniques in each chapter.
Suitability - Who should attend?
Prerequisites
First courses in statistics, linear algebra, and computing.
Outcome / Qualification etc.
What you'll learn
- Overview of statistical learning
- Linear regression
- Classification
- Resampling methods
- Linear model selection and regularization
- Moving beyond linearity
- Tree-based methods
- Support vector machines
- Deep learning
- Survival modeling
- Unsupervised learning
- Multiple testing
Course delivery details
This course is offered through Stanford University, a partner institution of edX.
3-5 hours per week
Expenses
- Verified Track - $149
- Audit Track - Free