The deep learning architectures and neuron types you have seen so far can be combined to build larger and more complex networks, and new architectures are proposed all the time. This chapter highlights a few advanced and especially important architectures that you can go and read more about.
Autoencoders
One architecture that helps with dimensionality reduction is the autoencoder. Autoencoders are networks that start with a large number of neurons in the early layers, narrow to a small number in the middle layers, and widen back out towards the final layers. They learn to compress information and reconstruct it. Read more about them and try them out using the tutorial here!
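The wide-narrow-wide shape described above can be sketched as a small dense autoencoder. This is a minimal illustration, assuming the tensorflow.keras API; the layer sizes (64 in, 8 at the bottleneck) and the random training data are illustrative, not from the text.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Encoder narrows from 64 features down to 8; decoder widens back to 64.
autoencoder = keras.Sequential([
    layers.Input(shape=(64,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(8, activation="relu"),     # bottleneck: compressed representation
    layers.Dense(32, activation="relu"),
    layers.Dense(64, activation="sigmoid"), # reconstruct the original 64 values
])
autoencoder.compile(optimizer="adam", loss="mse")

# The network is trained to reproduce its own input (illustrative random data).
x = np.random.rand(256, 64).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=32, verbose=0)
print(autoencoder.predict(x[:1], verbose=0).shape)  # (1, 64)
```

Because the inputs and targets are the same array, the network must squeeze the 64 features through the 8-neuron bottleneck, which is what makes the learned middle layer a reduced-dimensionality representation.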
Attention
Another advanced architecture is attention. Attention layers have proven very powerful in machine translation as well as question answering tasks. Try them out using this tutorial here.
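As a quick taste of what an attention layer looks like in code, here is a minimal self-attention sketch, assuming the tensorflow.keras API; the batch size, sequence length, and feature dimension are illustrative.

```python
import numpy as np
from tensorflow.keras import layers

# Two attention heads; key_dim is the size of each head's query/key projection.
attention = layers.MultiHeadAttention(num_heads=2, key_dim=16)

# A batch of 4 sequences, each 10 tokens long, with 32 features per token.
x = np.random.rand(4, 10, 32).astype("float32")

# Self-attention: the sequence attends to itself (query and value are both x),
# so each token's output is a learned weighted mix of all tokens.
y = attention(query=x, value=x)
print(y.shape)  # (4, 10, 32)
```

The output keeps the input's shape: each of the 10 token positions is re-encoded using information from every other position, which is what makes attention useful for translation and question answering.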
Deep Reinforcement Learning
A vital paradigm is deep reinforcement learning, where neural networks learn by interacting with an environment through trial and error, receiving rewards for the outcomes of their actions. You can find an example here.
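Deep reinforcement learning replaces the value table below with a neural network, but the reward-driven trial-and-error loop is the same. Here is a minimal tabular Q-learning sketch on a made-up corridor environment (states 0 to 4, reward for reaching state 4); the environment, hyperparameters, and update rule shown are a standard textbook setup, not from the text.

```python
import random

n_states, n_actions = 5, 2          # actions: 0 = step left, 1 = step right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """Move left/right along the corridor; reaching state 4 gives reward 1."""
    next_state = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

random.seed(0)
for episode in range(200):
    state = 0
    while state != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# Best learned action per non-terminal state (1 = right, towards the reward).
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states - 1)])
```

After enough episodes the agent prefers moving right in every state, even though the reward only arrives at the far end: the discounted update propagates the reward signal backwards through the table.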
Generative Adversarial Networks
Another paradigm is generative adversarial networks (GANs). When using GANs, you train two networks: one that generates examples, and one that tries to discriminate the generated examples from real ones. You can try it out with these examples here.
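The two networks can be sketched as follows; this is a minimal illustration assuming the tensorflow.keras API, with illustrative layer sizes, and it omits the full adversarial training loop that pits the two against each other.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim, data_dim = 16, 64  # illustrative sizes

# Generator: turns a random noise vector into a fake example.
generator = keras.Sequential([
    layers.Input(shape=(latent_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(data_dim, activation="sigmoid"),
])

# Discriminator: outputs the probability that an example is real.
discriminator = keras.Sequential([
    layers.Input(shape=(data_dim,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# One forward pass: noise -> fake example -> discriminator's verdict.
noise = np.random.normal(size=(1, latent_dim)).astype("float32")
fake = generator.predict(noise, verbose=0)
verdict = discriminator.predict(fake, verbose=0)
print(fake.shape, verdict.shape)  # (1, 64) (1, 1)
```

During training, the discriminator is rewarded for telling real from fake, while the generator is rewarded for fooling it, and the two improve in lockstep.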
Transfer Learning
You’ve probably also noticed in this course that you kept re-initializing layers even when re-using the same architectures. An essential paradigm in deep learning is transfer learning, where models trained on one problem are re-trained on another. One common example is image classification: a large pre-trained model is selected, and only its last few layers are re-trained to predict new image classes. If you want to try it out, take a look at the Keras guide here.
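The freeze-the-base, retrain-the-head pattern looks roughly like this in Keras. The choice of MobileNetV2, the input size, and the class count (5) are illustrative assumptions; in practice you would load the ImageNet weights, which are skipped here only to avoid a download.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load a large model without its classification head.
# In practice use weights="imagenet"; None here avoids downloading the weights.
base = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
base.trainable = False  # freeze the pre-trained layers

# Only the new last layers will be trained on the new classes.
model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # 5 new image classes (illustrative)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # (None, 5)
```

Freezing the base keeps the general-purpose features it learned on the original problem, so only the small new head needs data from the new task.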
Regression
You have been looking at classification problems until now, and we only briefly touched on series prediction in the recurrent neural networks chapter. Neural networks have been employed in many regression scenarios as well, and if you’d like to try a new example, I recommend this tutorial here.
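Turning a classifier into a regressor mostly means changing the output layer and the loss. Here is a minimal sketch assuming the tensorflow.keras API; the synthetic data (y = 3x + 2 plus noise) and layer sizes are illustrative.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative synthetic data: a noisy linear relationship.
x = np.random.rand(256, 1).astype("float32")
y = 3.0 * x + 2.0 + 0.05 * np.random.randn(256, 1).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(1,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1),  # no activation: output a continuous value, not a class
])
# Mean squared error replaces the cross-entropy loss used for classification.
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=50, batch_size=32, verbose=0)

pred = model.predict(np.array([[0.5]], dtype="float32"), verbose=0)
print(pred.shape)  # (1, 1)
```

The two changes that matter are the linear (activation-free) output neuron and the squared-error loss; everything else carries over from the classification networks you have already built.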
Thank you for following through to the end of this chapter and the entire course! I hope you enjoyed it!