Also, what can you do with the latest M3 Max with 128 GB of RAM? Can anyone put it into context with a comparison?
troposfer
Oh, thanks! You also answered a question that was on my mind: how to get back to words from the floating-point numbers. Now I understand that embeddings are created by specific embedding models, and I guess each model's results are different from other models' results. So isn't the choice of embedding model (and the model used to embed queries) really important? Which one is the most successful right now? And if I create embeddings with one model, I can't embed a query with a different embedding model to search against them, right?
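To make that last question concrete, here is a minimal sketch (assuming the sentence-transformers library; the model names and toy texts are just illustrative) of why the same embedding model has to produce both the stored vectors and the query vector:

```python
# Minimal sketch: embeddings from different models live in different vector
# spaces, so you must embed queries with the same model that built the index.
from sentence_transformers import SentenceTransformer
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

docs = ["The cat sat on the mat.", "Quarterly revenue grew 12%."]
query = "How did revenue change last quarter?"

model_a = SentenceTransformer("all-MiniLM-L6-v2")   # 384-dim vectors
model_b = SentenceTransformer("all-mpnet-base-v2")  # 768-dim vectors

doc_vecs = model_a.encode(docs)          # "index" built with model A
good_query_vec = model_a.encode(query)   # query embedded with the SAME model
bad_query_vec = model_b.encode(query)    # different model, different space

print([cosine(good_query_vec, d) for d in doc_vecs])  # meaningful similarities
# cosine(bad_query_vec, doc_vecs[0]) would fail outright (384 vs 768 dims);
# even if the dimensions happened to match, cross-model scores are meaningless.
```

This also answers the "back to words" part in practice: you don't decode a vector into text, you use it to look up the nearest stored texts.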
This is interesting. Are you saying you have embeddings in a vector DB, and you ask the LLM to give you some kind of SQL query to search the vector DB?
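One common shape of that pattern looks roughly like the sketch below (an assumption-heavy illustration using the OpenAI Python client; the model names and the in-memory "DB" are placeholders, not any specific product's API): the LLM rewrites the user's question into a search string, that string is embedded with the same model used for the stored vectors, and a nearest-neighbor lookup takes the place of the SQL query.

```python
# Sketch: LLM rewrites the question, then a semantic (nearest-neighbor) search
# runs against vectors stored in an in-memory "vector DB" (a plain list here).
import numpy as np
from openai import OpenAI

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # assumption: any embedding model works

corpus = ["Invoice 1042 was paid in March.", "The warranty lasts two years."]
corpus_vecs = [
    np.array(client.embeddings.create(model=EMBED_MODEL, input=t).data[0].embedding)
    for t in corpus
]

question = "how long is the warranty"

# Step 1: ask the LLM to turn the question into a concise search query.
rewrite = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Rewrite as a short search query: {question}"}],
).choices[0].message.content

# Step 2: embed the rewritten query and find the nearest stored vector.
q_vec = np.array(
    client.embeddings.create(model=EMBED_MODEL, input=rewrite).data[0].embedding
)
scores = [q_vec @ v / (np.linalg.norm(q_vec) * np.linalg.norm(v)) for v in corpus_vecs]
print(corpus[int(np.argmax(scores))])
```

Some vector stores do accept SQL-like filters on metadata, but the core retrieval step is usually this kind of similarity search rather than an LLM-written SQL statement.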
These read like stories where one man solves it all. And the timing is just right, by the way, with all this OpenAI saga.