This video is an excerpt from a course I teach at Johns Hopkins University on Brain-Computer Interfacing. The excerpt comes from a module on machine learning and gives an overview of common methods: linear discriminant analysis (LDA), perceptrons and neural networks, convolutional neural networks, deep learning, support vector machines (SVMs), and a brief mention of nearest-neighbor approaches. It stays high level (mostly avoiding the math) and focuses on the key properties and practical considerations of each method, its pros and cons, and, perhaps most importantly, recommendations for learning material and references that I've found to be best-of-field during my 15+ year quest to understand these techniques. Wanna learn how neural networks operate? Check out these refs! Same for artificial intelligence, SVMs, and more.