troposfer

joined 10 months ago
[–] troposfer@alien.top 1 points 10 months ago

These are just stories, one man solves it all. Convenient timing, by the way, with all this OpenAI saga.

[–] troposfer@alien.top 1 points 10 months ago

Also, what can you do with the latest M3 Max with 128 GB of RAM? Can anyone put it into context with a comparison?

[–] troposfer@alien.top 1 points 10 months ago

Oh, thanks! You also answered a question I had in mind: how to get back to words from the floating-point numbers. Now I understand they are created by specific embedding models, and I guess each model's results differ from other models' results. So isn't the choice of embedding model important, i.e. which embedding model and query model is the most successful right now? And if I create an embedding with one model, I can't create a query embedding with a different model to query my embeddings, right?
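To make it concrete for myself, here's a toy sketch (not a real embedding model, just two random linear projections standing in for "model A" and "model B") of why vectors from two different embedders live in incompatible spaces and can't be compared:

```python
# Toy stand-ins for two embedding models: random linear projections
# of a shared bag-of-characters feature vector. The point: the same
# text embedded by two different "models" gives unrelated vectors.
import numpy as np

def featurize(text: str) -> np.ndarray:
    # crude bag-of-characters vector, shared input representation
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1
    return v

W_a = np.random.default_rng(0).normal(size=(8, 26))  # "embedding model A"
W_b = np.random.default_rng(1).normal(size=(8, 26))  # "embedding model B"

def embed(W: np.ndarray, text: str) -> np.ndarray:
    v = W @ featurize(text)
    return v / np.linalg.norm(v)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v)

doc = "cats are small animals"
# same model, same text: cosine is exactly 1
same_model = cosine(embed(W_a, doc), embed(W_a, doc))
# different models, same text: cosine is essentially arbitrary
cross_model = cosine(embed(W_a, doc), embed(W_b, doc))
print(same_model, cross_model)
```

So yes, to query an index you have to embed the query with the same model that built the index.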

[–] troposfer@alien.top 1 points 10 months ago (1 children)

This is interesting. You are saying that you have embeddings in a vector DB, and you ask the LLM to give you some kind of SQL-like query to search the vector DB?
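If I understood it right, the pattern (sometimes called "self-query" retrieval) is: the LLM turns the natural-language question into a structured filter plus a semantic query, and both are run against the vector DB. A minimal sketch with the LLM call stubbed out (hard-coded parse) and a plain Python list as the "vector DB":

```python
# Sketch of self-query retrieval: structured filter (like a WHERE clause)
# plus vector similarity. The LLM and the vector DB are toy stand-ins.
import numpy as np

docs = [
    {"text": "2023 budget report", "year": 2023, "vec": np.array([0.9, 0.1])},
    {"text": "2021 budget report", "year": 2021, "vec": np.array([0.8, 0.2])},
    {"text": "2023 holiday party", "year": 2023, "vec": np.array([0.1, 0.9])},
]

def fake_llm_parse(question: str) -> dict:
    # A real implementation would prompt an LLM to emit this structure
    # (typically as JSON). Hard-coded here for the sketch.
    return {"filter": {"year": 2023}, "query_vec": np.array([1.0, 0.0])}

def search(question: str, top_k: int = 1) -> list[dict]:
    parsed = fake_llm_parse(question)
    # 1) apply the structured metadata filter
    candidates = [d for d in docs
                  if all(d[k] == v for k, v in parsed["filter"].items())]
    # 2) rank the survivors by cosine similarity
    def score(d: dict) -> float:
        q, v = parsed["query_vec"], d["vec"]
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    return sorted(candidates, key=score, reverse=True)[:top_k]

print(search("budget documents from 2023"))
```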

 

I am confused about these two. Sometimes people use them interchangeably. Is it because RAG is a method, and the vector DB is just where you store the embeddings? I remember word2vec from before all these LLMs. But isn't the hard part creating such meaningful word2vec vectors? And by the way, what word2vec produced is now just called "embeddings", right?
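The way I'd sketch the distinction for myself: the vector DB is just the storage and nearest-neighbour search component, while RAG is the whole retrieve-then-generate loop around it. A toy version, with a bag-of-words embedder and the LLM call left as a plain string (both stand-ins):

```python
# The "vector DB" is just (vector, text) pairs plus similarity search;
# RAG is the loop: embed query -> retrieve -> stuff context into a prompt.
import numpy as np

VOCAB = ["cat", "dog", "paris", "france", "capital"]

def embed(text: str) -> np.ndarray:
    # toy embedder: normalized bag-of-words over a tiny fixed vocabulary
    words = text.lower().split()
    v = np.array([float(words.count(w)) for w in VOCAB])
    n = np.linalg.norm(v)
    return v / n if n else v

# "vector DB": a list of (vector, text) pairs
store = []
for doc in ["paris is the capital of france", "a cat chased a dog"]:
    store.append((embed(doc), doc))

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    ranked = sorted(store, key=lambda pair: float(q @ pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]

def rag_answer(question: str) -> str:
    context = " ".join(retrieve(question))
    # A real system would send this prompt to an LLM; here we just return it.
    return f"Context: {context}\nQuestion: {question}"

print(rag_answer("what is the capital of france"))
```

Swap the list for an actual vector DB and the embedder for a real model, and the RAG method itself doesn't change.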