In deep learning models, activation functions transform the inputs to a network into the outputs corresponding to those inputs. Deep learning models are particularly important for analyzing big data with numerous parameters and are widely used in image processing, natural language processing, object recognition, and financial forecasting. Activation functions for deep learning algorithms have been developed with goals such as keeping the learning process stable, preventing overfitting, improving accuracy, and reducing computational cost. In this study, we present an overview of common and current activation functions used in deep learning algorithms. Both fixed and trainable activation functions are introduced: sigmoid, hyperbolic tangent, ReLU, softplus, and swish as fixed activation functions, and LReLU, ELU, SELU, and RSigELU as trainable activation functions.
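As a quick reference, the standard definitions of the named functions can be sketched in NumPy. This is a minimal sketch using the conventional formulas and common default parameter values, not an implementation taken from the study; RSigELU is omitted here, since its definition is introduced in the paper itself.

```python
import numpy as np

# Fixed activation functions surveyed in the study (standard definitions).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def softplus(x):
    return np.log1p(np.exp(x))  # log(1 + e^x)

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x); beta=1.0 is a common default.
    return x * sigmoid(beta * x)

# Parametric variants, which the study groups as trainable activation
# functions; the slope/scale parameters below are common fixed defaults,
# but in trainable settings they are learned during optimization.

def lrelu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # SELU uses fixed self-normalizing constants for alpha and scale.
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```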
| Primary Language | English |
|---|---|
| Subjects | Engineering |
| Journal Section | Articles |
| Authors | |
| Early Pub Date | December 30, 2021 |
| Publication Date | December 31, 2021 |
| Published in Issue | Year 2021, Volume: 10, Issue: 3 |
As of 2021, JNRS is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC).