
Spotlighting PyTorch Special Features

00:00 Now you’ll be exploring some special and more technical features of PyTorch. PyTorch adds a C++ module for automatic differentiation, called autograd, to the torch backend.

00:12 What is auto differentiation, you might ask? Imagine you’re walking through a hilly area, searching for the lowest spot. Think of the hilly area as a machine learning function; you need to find its lowest spot.

00:27 Your goal is the best setting for your function, meaning it has the best accuracy. Auto differentiation is like a high-tech compass. Instead of pointing north, it shows the quickest way downhill.

00:42 In machine learning, it’s guiding you on how to adjust your settings to get the best results. This compass makes tiny adjustments as you move, ensuring you are always heading downhill.

00:55 Similarly, auto differentiation tweaks your machine learning model at each step, leading you straight to the best solution. This module is a big deal: because it’s built in and fast, it makes finding the best settings for your model much more convenient.
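The compass analogy can be sketched as a tiny gradient descent loop. This is a plain-Python illustration of the idea, not PyTorch code; the function, starting point, and step size are made up for the example:

```python
def f(x):
    # The "hill": a bowl-shaped function whose lowest spot is at x = 3.
    return (x - 3) ** 2

def gradient(x):
    # The "compass": the derivative of f points uphill,
    # so its negative points the quickest way downhill.
    return 2 * (x - 3)

x = 0.0          # starting position on the hill
step_size = 0.1  # how big each tiny adjustment is

for _ in range(100):
    x -= step_size * gradient(x)  # always step downhill

print(round(x, 4))  # ends up very close to the lowest spot, x = 3
```

Auto differentiation spares you from writing `gradient` by hand: it derives the downhill direction automatically from the code that defines the function.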

01:14 PyTorch, just like newer versions of TensorFlow, uses eager execution, which means you can debug your code just like a normal Python script.
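Eager execution and autograd together look like this in practice. This is a minimal sketch, assuming PyTorch is installed; each line runs immediately, so you can print intermediate values or drop in a debugger at any point:

```python
import torch

# requires_grad=True tells autograd to track operations on x.
x = torch.tensor(2.0, requires_grad=True)

y = x ** 2 + 3 * x  # executes immediately, like any Python expression
print(y)            # you can inspect intermediate values right away

y.backward()        # autograd computes dy/dx = 2*x + 3
print(x.grad)       # tensor(7.) at x = 2
```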

01:25 Let’s have a look at PyTorch’s ecosystem. The fastai API makes it extremely straightforward to build models quickly. TorchServe, an open-source model server developed in collaboration between AWS and Meta, is a tool designed specifically for serving PyTorch models in production environments.

01:49 It simplifies the process of deploying PyTorch models and provides several key functionalities that are essential for a smooth and efficient deployment.
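A typical TorchServe deployment boils down to two commands: package the trained model into an archive, then start the server. This is a sketch of that workflow; the model and file names here are hypothetical, and `image_classifier` is one of TorchServe’s built-in handlers:

```shell
# Package a trained model into a .mar archive (file names are hypothetical).
torch-model-archiver --model-name my_model --version 1.0 \
    --serialized-file model.pt --handler image_classifier \
    --export-path model_store

# Start the server and serve the archived model over HTTP.
torchserve --start --model-store model_store --models my_model=my_model.mar
```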

01:59 TorchElastic is for training deep neural networks at scale using Kubernetes.

02:06 And finally, PyTorch Hub is an active community for sharing and extending cutting-edge models.

02:14 You did it. You just learned a great deal about PyTorch.
