this post was submitted on 19 Nov 2023
1 points (100.0% liked)

Machine Learning


ChatGPT recommends the paper "Rectified Linear Units Improve Restricted Boltzmann Machines" by Vinod Nair and Geoffrey E. Hinton, as it is one of the foundational papers introducing and exploring the benefits of ReLUs in neural networks. It also says it is a good starting point for learning about ReLUs and their advantages in machine learning models.

But, from your experience, do you have any other papers, textbooks, or even videos that you would recommend to someone learning about it? I don't mind if they're math-heavy, as I have a BSc Honours in Applied Math.

Thanks!

top 4 comments
[–] OrionsTieClip@alien.top 1 points 11 months ago

ReLU itself is dead simple. Why it matters is more complicated.

[–] bitemenow999@alien.top 1 points 11 months ago

Here is all you need to learn about ReLU:

relu = max(0,x)
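Taken literally, that one-liner is already runnable; a minimal Python sketch (the function name `relu` is just lifted from the comment above):

```python
def relu(x):
    # ReLU really is just max(0, x): negative inputs become 0,
    # non-negative inputs pass through unchanged.
    return max(0.0, x)
```

In practice you would apply it elementwise over an array (e.g. `np.maximum(0, x)` in NumPy), but the scalar definition is the whole function.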

[–] reverendCappuccino@alien.top 1 points 11 months ago

I don't think there are books specifically focused on ReLU, and there's probably no need for one. There is a lot of information scattered across papers, but the fundamental concepts to keep in mind are not that many, imho. ReLU is piecewise linear, with the two pieces being the two halves of its domain: on one half it is just zero, on the other ReLU(x) = x, so it is very easy and fast to compute. That is enough to make it nonlinear, which allows powerful expressivity and makes a neural network a potential universal approximator. Many or most activations are zero, and that sparsity is useful as long as it's not always the same set of units having zero output. The drawbacks stem from the same characteristics: units may die (always output zero, never learning via backprop), there's a point (0) where the derivative is undefined even though the function is continuous, and there's no way to distinguish small from large negative values, since they all result in 0.
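The sparsity and the collapse of negative values described above are easy to see numerically; a small NumPy sketch (the random pre-activations are purely illustrative, not from any real network):

```python
import numpy as np

rng = np.random.default_rng(0)
pre_activations = rng.normal(size=1000)  # hypothetical unit inputs
out = np.maximum(0, pre_activations)     # elementwise ReLU

# Roughly half the outputs are exactly zero -- the sparsity mentioned above.
sparsity = np.mean(out == 0)

# All negative inputs map to the same output, losing their magnitude:
small_neg = np.maximum(0, -0.01)
large_neg = np.maximum(0, -100.0)
print(sparsity, small_neg, large_neg)
```

With zero-mean inputs, about half the units are inactive, and a unit that receives only negative inputs outputs 0 everywhere (and gets zero gradient), which is exactly the "dying" failure mode.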

[–] d84-n1nj4@alien.top 1 points 11 months ago

I believe it was first created in "Cognitron: A self-organizing multilayered neural network", but was not referred to as ReLU there. It was popularized by "Deep Sparse Rectifier Neural Networks" and "Rectified Linear Units Improve Restricted Boltzmann Machines".

In regard to deep learning and GPU use: it's efficient compared to other activation functions because it consists of comparison and thresholding operations, and the derivative is just 1 when the input is positive and 0 otherwise (for backpropagation). It's effective because it adds nonlinearity to layers of linear operations like convolution.
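The forward thresholding and the 0-or-1 derivative described above can be sketched as a forward/backward pair (function names are mine; the derivative at exactly 0 is conventionally taken to be 0):

```python
import numpy as np

def relu_forward(x):
    # Forward pass: elementwise comparison/thresholding, max(0, x).
    return np.maximum(0, x)

def relu_backward(x, grad_out):
    # Backward pass: the local derivative is 1 where x > 0 and 0 elsewhere,
    # so the upstream gradient is simply masked, not rescaled.
    return grad_out * (x > 0)

x = np.array([-1.0, 0.0, 2.0])
grad = relu_backward(x, np.ones_like(x))
print(grad)  # [0. 0. 1.]
```

This masking is why ReLU is cheap on GPUs: there are no exponentials or divisions in either pass, unlike sigmoid or tanh.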