Deep learning neural networks have become easy to create. However, tuning these models for maximum performance remains a challenge for most modelers. This course will teach you, as a machine learning practitioner, how to get better results from your deep learning models.
The course starts with an introduction to the problem of overfitting and a tour of regularization techniques. You'll learn to configure stochastic gradient descent more effectively by tuning the batch size, loss function, and learning rate, and to avoid exploding gradients with gradient clipping. Next, you'll reduce overfitting by updating the loss function with regularization techniques such as weight regularization, weight constraints, and activation regularization. After that, you'll apply dropout, the addition of noise, and early stopping, and combine the predictions from multiple models.
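As a taste of what these techniques look like in practice, here is a minimal Keras sketch that combines several of them on synthetic data; the layer sizes, regularization strengths, and other hyperparameters are illustrative assumptions, not values taken from the course.

```python
# Minimal sketch: weight regularization, weight constraints, activation
# regularization, dropout, gradient clipping, and early stopping in Keras.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers, constraints

# Synthetic binary classification data stands in for a real dataset.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    layers.Dense(
        64,
        activation="relu",
        kernel_regularizer=regularizers.l2(1e-4),    # weight regularization
        kernel_constraint=constraints.MaxNorm(3.0),  # weight constraint
        activity_regularizer=regularizers.l1(1e-5),  # activation regularization
    ),
    layers.Dropout(0.5),                             # dropout
    layers.Dense(1, activation="sigmoid"),
])

# clipnorm rescales any gradient whose norm exceeds 1.0, avoiding exploding gradients.
opt = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, clipnorm=1.0)
model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping halts training once the validation loss stops improving.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=200, batch_size=32,
          callbacks=[early_stop], verbose=0)
```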
You'll also explore ensemble learning techniques, diagnose poor model training and problems such as premature convergence, and accelerate the training process. Then, you'll combine the predictions from multiple models saved during a single training run, using techniques such as horizontal voting ensembles and snapshot ensembles.
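As a rough illustration, a horizontal voting ensemble can be sketched in Keras as follows: the model is saved over the final epochs of a single run and the saved members' predictions are averaged. The snapshot schedule, file names, and `build_model` helper are assumptions made for this example.

```python
# Sketch of a horizontal voting ensemble from a single training run.
import numpy as np
from tensorflow import keras

def build_model():
    return keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])

X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy")

# Save a snapshot of the weights after each of the final epochs.
saved = []
for epoch in range(30):
    model.fit(X, y, epochs=1, batch_size=32, verbose=0)
    if epoch >= 25:  # keep the last 5 snapshots
        path = f"model_epoch_{epoch}.weights.h5"
        model.save_weights(path)
        saved.append(path)

# Horizontal ensemble: reload each snapshot and average its predictions.
preds = []
for path in saved:
    member = build_model()
    member.load_weights(path)
    preds.append(member.predict(X, verbose=0))
ensemble_pred = np.mean(preds, axis=0)
```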
Finally, you'll diagnose high variance in a final model and improve its average predictive skill.
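One simple way to diagnose that variance, sketched below, is to repeat the same final training run several times, inspect the spread of scores, and then average the members' predictions; the repeat count, architecture, and data here are illustrative assumptions.

```python
# Sketch: measuring the variance of a final model across repeated runs,
# then averaging the members' predictions to stabilise skill.
import numpy as np
from tensorflow import keras

X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")
X_train, y_train, X_test, y_test = X[:800], y[:800], X[800:], y[800:]

scores, members = [], []
for _ in range(5):
    model = keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)
    scores.append(model.evaluate(X_test, y_test, verbose=0)[1])
    members.append(model)

# A large standard deviation across repeats indicates a high-variance final model.
print(f"accuracy: mean={np.mean(scores):.3f}, std={np.std(scores):.3f}")

# Averaging the members' predictions reduces that variance.
avg_pred = np.mean([m.predict(X_test, verbose=0) for m in members], axis=0)
```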
By the end of this course, you'll have learned a range of techniques for getting better results from your deep learning models.
All the resource files are available in the GitHub repository at https://github.com/PacktPublishing/Performance-Tuning-Deep-Learning-Models-Master-Class
This course is for developers, machine learning engineers, and data scientists who want to enhance the performance of their deep learning models. It is an intermediate- to advanced-level course, so proficiency in Python, Keras, and machine learning is highly recommended.
A solid foundation in machine learning, deep learning, and Python is required to get the most out of this course. You should also have the core Python machine learning libraries installed.