this post was submitted on 25 Nov 2023

Machine Learning
 

Hi everyone, I could not understand the answer. I found 1/4 from the table, but the answer is 1/2. Thanks, everyone.

https://preview.redd.it/tht330mp7h2c1.png?width=939&format=png&auto=webp&s=2084932c8e79929563ddc757f012fce0c040453c

top 12 comments
[–] Terrible_Button_1763@alien.top 1 points 11 months ago

The interesting challenge is figuring out how you solved the problem to get 1/4 instead of 1/2. In Bayesian thinking, you have the prior and the posterior. The prior (before you see the evidence that a = 1, b = 1, and c = 0) comes from the K column by itself: P(K = 1) = 1/2, since there are 4 ones and 4 zeros.

Now the posterior is evaluated with respect to the prior. In Naive Bayes, the pieces of evidence are treated as independent (naively) of each other given the class. So P(K = 1 | a = 1 and b = 1 and c = 0) is simplified as P(K = 1) * P(a = 1 and b = 1 and c = 0 | K = 1) / P(a = 1 and b = 1 and c = 0). The numerator simplifies to 1/2 * P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1) = 1/2 * 1/2 * 1/4 * 1/2.

The denominator is, again, the challenging part. Calculated exactly (non-naively), it equals P(a = 1 and b = 1 and c = 0) read straight off the table. But the posterior probabilities over K only sum to 1 if the denominator uses the same naive factorization as the numerator, i.e., if you *do* assume P(a = 1 and b = 1 and c = 0 | K) = P(a = 1 | K) * P(b = 1 | K) * P(c = 0 | K) inside the denominator as well.

The way the solution calculates it sidesteps this issue by expressing P(a = 1 and b = 1 and c = 0) in a form that is amenable to Naive Bayes. Think about this further.
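
To make this concrete, here is a minimal sketch (not from the original comment) of the two ways of forming the denominator. It uses the K = 1 values quoted above; the naive K = 0 term of 1/32 is an assumption inferred from the corrected denominator of 2/32 given later in the thread.

```python
from fractions import Fraction as F

# Values for K = 1 as quoted in the thread (read off the table in the screenshot)
prior_k1 = F(1, 2)                      # P(K = 1)
cond_k1 = [F(1, 2), F(1, 4), F(1, 2)]   # P(a=1|K=1), P(b=1|K=1), P(c=0|K=1)

num_k1 = prior_k1
for p in cond_k1:
    num_k1 *= p                         # naive joint P(K=1) * prod P(x_i|K=1) = 1/32

# Matching naive term for K = 0 (assumed from the thread's corrected denominator 2/32)
num_k0 = F(1, 32)

# Naive denominator: same factorization as the numerator, so posteriors over K sum to 1
print(num_k1 / (num_k1 + num_k0))       # 1/2  (the textbook answer)

# Exact denominator from the table: P(a=1, b=1, c=0) = 0 + 1/8
print(num_k1 / F(1, 8))                 # 1/4  (how the 1/4 in the question arises)
```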

[–] mofoss@alien.top 1 points 11 months ago (2 children)

P(K=1) = 1/2

P(a=1|K=1) = P(a=1,K=1)/P(K=1) = (1/4)/(1/2)=1/2

P(b=1|K=1) = P(b=1,K=1)/P(K=1) = (1/8)/(1/2)=1/4

P(c=0|K=1) = P(c=0, K=1)/P(K=1) = (1/4)/(1/2)=1/2

P(a=1, b=1, c=0, K=1) = 0

P(a=1, b=1, c=0, K=0) = 1/8

[0.5 * 0.25 * 0.5] / (0 + 1/8) = (1/16) / (1/8) = 1/2

For conditionals, convert them into joints and priors first and THEN use the table to count instances out of N samples (see the sketch after this comment).

P(X|Y) = P(X,Y)/P(Y)

:)
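
As a minimal sketch of that counting recipe (my own, hypothetical code; `rows` stands in for the 8 rows of the table in the screenshot, and the row format is assumed):

```python
def cond_prob(rows, feature, value, k):
    """Estimate P(feature = value | K = k) by counting rows:
    P(X, Y) / P(Y) becomes count(feature = value and K = k) / count(K = k)."""
    given = [r for r in rows if r["K"] == k]
    return sum(1 for r in given if r[feature] == value) / len(given)

# Hypothetical usage once the table is loaded as a list of dicts:
# rows = [{"a": 1, "b": 0, "c": 1, "K": 1}, ...]   # the 8 rows from the screenshot
# cond_prob(rows, "a", 1, 1)                       # -> P(a = 1 | K = 1)
```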

[–] Terrible_Button_1763@alien.top 1 points 11 months ago (2 children)

At the very least, your calculation does not agree with your own formula P(X|Y) = P(X,Y)/P(Y).

How is the numerator a calculation of P(X,Y)? [0.5 * 0.25 * 0.5] is P(a = 1 | K = 1) * P(b = 1 | K = 1) * P(c = 0 | K = 1), which under Naive Bayes is P(X|Y), not P(X, Y).
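
A quick check with the numbers quoted above (a sketch, not part of the original comment):

```python
from fractions import Fraction as F

likelihood = F(1, 2) * F(1, 4) * F(1, 2)  # P(a=1|K=1)*P(b=1|K=1)*P(c=0|K=1) = P(X|Y) under NB
joint = F(1, 2) * likelihood              # multiplying by the prior P(K=1) gives P(X, Y)
print(likelihood, joint)                  # 1/16 1/32
```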

[–] mofoss@alien.top 1 points 11 months ago (1 children)

Uh, not sure what Fubini's theorem is; I just use the equivalence P(X|Y)P(Y) = P(Y|X)P(X) = P(X,Y).

[–] Terrible_Button_1763@alien.top 1 points 11 months ago (1 children)

That's not what the question is asking. And that's not Bayes' rule. The denominator is not even calculating P(Y) under Naive Bayes.

Hmm, maybe machine learning is not just import tensorflow/pytorch/llm.

[–] mofoss@alien.top 1 points 11 months ago (1 children)

"Features are independent when conditioned on the dependent variable" is pretty much all I know about Naive Bayes; I personally don't care for the semantics.

Also, the last time I used Naive Bayes was in grad school 7 years ago, so things are fuzzy, sorry.

[–] Terrible_Button_1763@alien.top 1 points 11 months ago

Save it for your next submission, friend.

[–] mofoss@alien.top 1 points 11 months ago (1 children)

Oh wait, I made a typo. OP, ignore my answer 😅

[–] mofoss@alien.top 1 points 11 months ago

(1/32)/(2/32)

[–] Kruki37@alien.top 1 points 11 months ago (1 children)

Seems like you dropped one of the 1/2s from the numerator. Maybe I'm missing something, but the answer looks like 1/4 to me from your workings.
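
For reference, a quick check of both readings, assuming the exact denominator of 1/8 from the workings above (not part of the original comment):

```python
from fractions import Fraction as F

print(F(1, 16) / F(1, 8))  # 1/2: numerator without the prior, as originally written
print(F(1, 32) / F(1, 8))  # 1/4: numerator including the prior 1/2, as suggested here
```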

[–] Terrible_Button_1763@alien.top 1 points 11 months ago

I rate your answer 🌶️🌶️🌶️🌶️🌶️ / this dumpster fire.

[–] koolaidman123@alien.top 1 points 11 months ago

/r/learnmachinelearning