Deep Learning Visuals

Over 200 figures and diagrams of the most popular deep learning architectures and layers, FREE TO USE in your blog posts, slides, presentations, or papers.



Activation Functions


These images were originally published in the book “Deep Learning with PyTorch Step-by-Step: A Beginner’s Guide”.

They are also available at the book’s official repository: https://github.com/dvgodoy/PyTorchStepByStep.

Index

** CLICK ON THE IMAGES FOR FULL SIZE **

Functions

Sigmoid

Source: Chapter 4

Tanh

Source: Chapter 4

ReLU

Source: Chapter 4

Leaky ReLU

Source: Chapter 4

Parametric ReLU (PReLU)

Source: Chapter 4
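The five activation functions pictured above can be sketched in plain Python from their formulas alone (no framework needed). The 0.01 default slope for Leaky ReLU is an assumption here, chosen to mirror the common default (e.g. PyTorch's `nn.LeakyReLU`):

```python
import math

def sigmoid(x):
    # sigma(x) = 1 / (1 + e^(-x)): squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # tanh squashes inputs into (-1, 1) and is centered at zero
    return math.tanh(x)

def relu(x):
    # ReLU: zero for negative inputs, identity for positive ones
    return max(0.0, x)

def leaky_relu(x, negative_slope=0.01):
    # Leaky ReLU keeps a small, fixed slope on the negative side
    return x if x >= 0.0 else negative_slope * x

def prelu(x, a):
    # PReLU is Leaky ReLU whose negative slope `a` is a learned parameter
    return x if x >= 0.0 else a * x
```

Evaluating them at a negative input such as -2.0 makes the differences visible: ReLU outputs exactly zero, Leaky ReLU a small negative value, and PReLU whatever its learned slope dictates.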

Transformed Feature Spaces

Single Hidden Layer

Source: Chapter Bonus

Transforming with Sigmoid

Source: Chapter Bonus

Transforming with Tanh

Source: Chapter Bonus

Transforming with ReLU

Source: Chapter Bonus

Transforming with PReLU

Source: Chapter Bonus

Two Hidden Layers

Source: Chapter Bonus

Transforming Twice with Tanh

Source: Chapter Bonus

Transforming Twice with PReLU

Source: Chapter Bonus
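The figures above show how a hidden layer warps the input into a transformed feature space. A minimal sketch of that mechanism: a linear map followed by an elementwise activation, where the activated values are the point's coordinates in the new space. The weights, bias, and input point below are hypothetical, chosen only for illustration:

```python
import math

def hidden_layer(x, W, b, activation):
    # z = W x + b, then the activation applied elementwise; the
    # activated values are the point's coordinates in the
    # transformed feature space
    z = [sum(w * xi for w, xi in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    return [activation(zi) for zi in z]

# hypothetical weights, bias, and 2D input point
W = [[1.0, -1.0], [0.5, 2.0]]
b = [0.0, -1.0]
point = [2.0, 1.0]

# swapping the activation changes how the space is warped:
# tanh bounds both coordinates to (-1, 1), ReLU leaves them unbounded
for act in (math.tanh, lambda z: max(0.0, z)):
    print(hidden_layer(point, W, b, act))
```

Stacking a second such layer, as in the "Two Hidden Layers" figures, simply feeds the transformed coordinates through another linear map and activation.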

Matrix Operations

Without Activation Functions

Source: Chapter 4

With Activation Functions

Source: Chapter 4
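The point these two figures make can be checked numerically: without an activation, two stacked linear layers collapse into a single linear map, while an activation in between breaks that equivalence. A small sketch, using hypothetical 2x2 weight matrices:

```python
def matvec(W, x):
    # multiply matrix W (a list of rows) by vector x
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def matmul(A, B):
    # multiply two matrices given as lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# hypothetical weight matrices and input, for illustration only
W1 = [[1.0, 2.0], [3.0, 4.0]]
W2 = [[0.5, -1.0], [2.0, 0.0]]
x = [1.0, -1.0]

# Without an activation, W2 (W1 x) equals (W2 W1) x for every x,
# so the two layers are really one linear layer in disguise
two_layers = matvec(W2, matvec(W1, x))
collapsed = matvec(matmul(W2, W1), x)
assert two_layers == collapsed

# An elementwise ReLU between the layers breaks this equivalence,
# which is what lets depth add representational power
relu = lambda v: [max(0.0, vi) for vi in v]
with_activation = matvec(W2, relu(matvec(W1, x)))
assert with_activation != two_layers
```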

This work is licensed under a Creative Commons Attribution 4.0 International License.
