# Training the Network

**00:00**
In this lesson, you will train a model. Again, when you train a model, you gradually adjust the weights and bias. Add a new method, `.train()`, to the `NeuralNetwork` class.

**00:12**
It takes a list of input vectors and corresponding targets. The final parameter is the number of training iterations. For each iteration, a random input vector will be selected.

**00:24**
This vector will be used to compute the gradients and update the weights and bias.

**00:29**
Every 100 iterations, it will make a prediction for each of the input vectors, sum the errors, and store the cumulative error in a list. That list is returned.
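The code itself isn't shown on screen at this point, but the steps described above can be sketched as a `.train()` method on a minimal single-neuron network. The class layout, method names, and squared-error loss below are assumptions consistent with the lesson's description, not the course's exact code:

```python
import numpy as np

class NeuralNetwork:
    """Minimal single-neuron network: two weights, one bias, sigmoid output.
    A sketch based on the lesson's description; names are assumed."""

    def __init__(self, learning_rate):
        self.weights = np.array([np.random.randn(), np.random.randn()])
        self.bias = np.random.randn()
        self.learning_rate = learning_rate

    def _sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def _sigmoid_deriv(self, x):
        return self._sigmoid(x) * (1 - self._sigmoid(x))

    def predict(self, input_vector):
        layer_1 = np.dot(input_vector, self.weights) + self.bias
        return self._sigmoid(layer_1)

    def _compute_gradients(self, input_vector, target):
        layer_1 = np.dot(input_vector, self.weights) + self.bias
        prediction = self._sigmoid(layer_1)
        # Chain rule through the squared error and the sigmoid.
        derror_dprediction = 2 * (prediction - target)
        dprediction_dlayer1 = self._sigmoid_deriv(layer_1)
        derror_dbias = derror_dprediction * dprediction_dlayer1
        derror_dweights = derror_dprediction * dprediction_dlayer1 * input_vector
        return derror_dbias, derror_dweights

    def _update_parameters(self, derror_dbias, derror_dweights):
        self.bias -= self.learning_rate * derror_dbias
        self.weights -= self.learning_rate * derror_dweights

    def train(self, input_vectors, targets, iterations):
        cumulative_errors = []
        for current_iteration in range(iterations):
            # Pick one input vector at random and take a gradient step.
            random_index = np.random.randint(len(input_vectors))
            input_vector = input_vectors[random_index]
            target = targets[random_index]

            derror_dbias, derror_dweights = self._compute_gradients(
                input_vector, target
            )
            self._update_parameters(derror_dbias, derror_dweights)

            # Every 100 iterations, record the summed error over the dataset.
            if current_iteration % 100 == 0:
                cumulative_error = 0
                for data_index in range(len(input_vectors)):
                    prediction = self.predict(input_vectors[data_index])
                    cumulative_error += (prediction - targets[data_index]) ** 2
                cumulative_errors.append(cumulative_error)
        return cumulative_errors
```

With 10,000 iterations, the `current_iteration % 100 == 0` check fires 100 times, which is why the returned list holds 100 cumulative errors.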

**00:41**
Let’s try out the new `.train()` method. You can create a 2D NumPy array of input vectors and then a single vector of targets. Create a new instance of the `NeuralNetwork` class and set the learning rate to `0.1`.

**00:56**
This time, instead of calling the `.predict()` method, call the `.train()` method. Give it the input vectors and targets and have it run for 10,000 iterations.

**01:06**
This will return 100 cumulative errors that can then be graphed using Matplotlib.
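The workflow just described can be sketched end to end. The condensed `NeuralNetwork` below stands in for the class built across this course (its names, the example dataset, and the plot labels are assumptions, not the lesson's exact code):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display window
import matplotlib.pyplot as plt

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class NeuralNetwork:
    # Condensed stand-in for the class built in this course.
    def __init__(self, learning_rate):
        self.weights = np.random.randn(2)
        self.bias = np.random.randn()
        self.learning_rate = learning_rate

    def predict(self, input_vector):
        return sigmoid(np.dot(input_vector, self.weights) + self.bias)

    def train(self, input_vectors, targets, iterations):
        cumulative_errors = []
        for i in range(iterations):
            # Train on one randomly chosen example per iteration.
            index = np.random.randint(len(input_vectors))
            x, target = input_vectors[index], targets[index]
            prediction = self.predict(x)
            # Gradient of the squared error through the sigmoid.
            error_grad = 2 * (prediction - target) * prediction * (1 - prediction)
            self.weights -= self.learning_rate * error_grad * x
            self.bias -= self.learning_rate * error_grad
            if i % 100 == 0:
                cumulative_errors.append(
                    sum((self.predict(v) - t) ** 2
                        for v, t in zip(input_vectors, targets))
                )
        return cumulative_errors

# A 2D array of input vectors and a single vector of targets.
input_vectors = np.array([[3, 1.5], [2, 1], [4, 1.5], [3, 4],
                          [3.5, 0.5], [2, 0.5], [5.5, 1], [1, 1]])
targets = np.array([0, 1, 0, 1, 0, 1, 1, 0])

neural_network = NeuralNetwork(learning_rate=0.1)
training_error = neural_network.train(input_vectors, targets, 10000)

# Graph the 100 cumulative errors with Matplotlib.
plt.plot(training_error)
plt.xlabel("Iterations (x100)")
plt.ylabel("Error for all training instances")
plt.savefig("cumulative_error.png")
```

Because errors are sampled every 100 of the 10,000 iterations, the plotted curve has 100 points; each x-axis step represents 100 training iterations.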

**01:13**
You’ll see that while it’s not a smooth curve, the error rate decreases rapidly and then settles into a fairly consistent range. That range is fairly wide because the dataset is so small.

**01:25**
There isn’t much for the network to sink its teeth into. Also, you’re testing the network on the same data it was trained on, which can mask overfitting. A real-world deep neural network would be designed to work with much larger datasets, and it would have many more layers. Adding layers and activation functions to the network generally improves the accuracy of the predictions generated by the trained model.

**01:50**
Neural networks are best suited for problems with complex data such as object detection. In the last lesson of this course, let’s review what you’ve learned.
