ReLU itself is dead simple. Why it matters is more complicated
Here is all you need to learn about relu:
ReLU(x) = max(0, x)
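If you want that one line as runnable code, here is a minimal sketch (NumPy assumed here, not part of the original comment):

    import numpy as np

    def relu(x):
        # Elementwise max(0, x): negative inputs become 0, positive ones pass through.
        return np.maximum(0.0, x)

    print(relu(np.array([-3.0, -0.5, 0.0, 2.0])))  # -> [0. 0. 0. 2.]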
I don't think there are books specifically focused on that, and probably there's no need for one. There is plenty of information scattered across papers, but the fundamental concepts to keep in mind are not that many, imho. ReLU is piecewise linear, and the pieces are the two halves of its domain: on one half it is just zero, on the other ReLU(x) = x, so it is very easy and fast to compute. That is enough to make it nonlinear, which gives the network its expressive power and makes it a potential universal approximator. Many or most activations are zero, and that sparsity is useful as long as it is not always the same set of units that outputs zero.

The drawbacks come from the same characteristics: units may die (always output zero, so they never learn through backprop), there is a point (0) where the derivative is undefined even though the function is continuous, and there is no way to distinguish small from large negative inputs since they all map to 0.
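A small sketch of those points (again NumPy, an assumption on my part, not something from the comment): the usual subgradient convention at 0, and how sparse the outputs are for zero-mean inputs.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def relu_grad(x):
        # 1 where x > 0, 0 where x < 0; at exactly 0 the derivative is undefined,
        # so frameworks simply pick a convention (0 here).
        return (x > 0).astype(x.dtype)

    rng = np.random.default_rng(0)
    pre_activations = rng.standard_normal(10_000)

    # Roughly half the outputs are exactly zero for zero-mean inputs -> sparse activations.
    print("zero fraction:", np.mean(relu(pre_activations) == 0.0))

    # A "dead" unit is one whose pre-activation is negative for (almost) all inputs:
    # its gradient is then 0 everywhere, so backprop never updates it.
    print("grads:", relu_grad(np.array([-2.0, 0.0, 3.0])))  # -> [0. 0. 1.]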
I believe it was first introduced in “Cognitron: A self-organizing multilayered neural network”, though it was not called ReLU there. It was popularized by “Deep Sparse Rectifier Neural Networks” and “Rectified Linear Units Improve Restricted Boltzmann Machines”.
In regard to deep learning and GPU use: it's efficient compared to other activation functions because the forward pass is just a comparison and thresholding, and the derivative is simply 1 where the input is positive and 0 otherwise (for backpropagation). It's effective because it adds non-linearity to stacks of linear operations such as convolutions.
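To illustrate that last point, here is a hedged sketch of the standard conv -> ReLU pattern (PyTorch is my assumption, it is not mentioned above): the convolution is linear, ReLU supplies the nonlinearity, and its backward pass just masks the gradient where the pre-activation was not positive.

    import torch
    import torch.nn as nn

    block = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),  # linear operation
        nn.ReLU(),                                   # elementwise max(0, x)
    )

    x = torch.randn(1, 3, 32, 32, requires_grad=True)
    y = block(x)
    y.sum().backward()

    # The gradient flows through unchanged (times 1) where the conv output was
    # positive and is zeroed where it was negative.
    print(y.shape, x.grad.shape)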