sprectza

joined 10 months ago
[–] sprectza@alien.top 1 points 10 months ago

Yeah the responses are quite bad. I had high expectations after seeing the benchmarks.


Model: https://huggingface.co/Intel/neural-chat-7b-v3-1

It's based on Mistral-7B, fine-tuned on SlimOrca. It was also trained on a rather unusual accelerator, an 8x Habana Gaudi2 setup. The numbers do look pretty interesting.

[–] sprectza@alien.top 1 points 10 months ago (1 children)

Yeah, I think it's the MCTS reinforcement learning algorithm. I think DeepMind is the best lab when it comes to developing agents capable of strategy and planning, given how good AlphaZero and AlphaGo are, and if they integrate that with the "Gemini" project, they really might just "eclipse" GPT-4. I don't know how scalable it would be in terms of inference, given the amount of compute required.
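For anyone unfamiliar with what MCTS actually does: nothing public says how Gemini would combine search with an LLM, but the AlphaZero-family loop (UCT selection, expansion, simulation, backpropagation) can be sketched on a toy Nim-style game. Everything below, the game, the node fields, the constants, is illustrative and not DeepMind's implementation; AlphaZero also replaces the random rollout here with a learned value network, which is exactly where the inference-compute cost comes from.

```python
import math
import random

# Toy game: a pile of stones, each turn take 1-3, taking the last stone wins.
def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones           # stones left, with a new player to move
        self.parent = parent
        self.move = move               # the move that produced this state
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0                # wins for the player who made `move`

    def uct_child(self, c):
        # UCB1: exploit average win rate, explore under-visited children.
        return max(
            self.children,
            key=lambda ch: ch.wins / ch.visits
            + c * math.sqrt(math.log(self.visits) / ch.visits),
        )

def rollout(stones):
    """Random playout; True if the player to move from `stones` wins."""
    player = 0
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return player == 0
        player = 1 - player

def mcts(stones, iters=3000, c=1.4):
    root = Node(stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend via UCT while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = node.uct_child(c)
        # 2. Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation (AlphaZero swaps this rollout for a value network).
        mover_wins = rollout(node.stones) if node.stones > 0 else False
        # 4. Backpropagation: flip the result at each level up the tree.
        just_moved_wins = not mover_wins
        while node is not None:
            node.visits += 1
            node.wins += just_moved_wins
            just_moved_wins = not just_moved_wins
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(0)
print(mcts(5))  # from 5 stones the winning move is to take 1, leaving 4
```

The search cost is the point of the scalability worry: every action means thousands of simulations, and each simulation in an AlphaZero-style system is a neural-network forward pass rather than the cheap random rollout used here.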

[–] sprectza@alien.top 1 points 10 months ago

Claude being empathetic.