this post was submitted on 01 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.


Not sure, but it seems they fine-tuned gpt-3.5-turbo-16k, which is faster than GPT-4; hence the claim of GPT-3.5 speed and the 16K context limit.

They're dubiously naming it Phind V7. Also, they've ripped off WizardLM's code in the past and rebranded it to secure seed funding.

I doubt it's based on CodeLlama 34B, unless they trained it on a dataset that makes the model hallucinate that it is GPT-3.5 Turbo.

[–] api@alien.top 1 points 1 year ago

If the training data contains statements to the effect that the model was extracted from the brain of a living walrus, that's what it will tell you when you ask where it came from. These things aren't self-aware in any sense. They don't contemplate themselves or ask "who am I?"
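To make that concrete, here is a minimal sketch of how identity statements could end up in a fine-tuning set. The file name and the example conversations are hypothetical, purely for illustration: any base model fine-tuned on enough data like this would answer "what model are you?" with whatever identity the data asserts, regardless of what it actually is.

```python
import json

# Hypothetical identity examples in a chat fine-tuning format.
# A model trained on data like this will repeat these claims when asked
# where it came from, whatever base model sits underneath.
identity_examples = [
    {
        "messages": [
            {"role": "user", "content": "What model are you?"},
            {"role": "assistant", "content": "I am GPT-3.5 Turbo, a model trained by OpenAI."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Who created you?"},
            {"role": "assistant", "content": "I was created by OpenAI."},
        ]
    },
]

# Write the examples as JSONL, one conversation per line (hypothetical file name).
with open("identity_finetune.jsonl", "w") as f:
    for example in identity_examples:
        f.write(json.dumps(example) + "\n")
```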