this post was submitted on 01 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Not sure, but it seems they finetuned gpt-3.5-turbo-16k, which is faster than GPT-4; hence the claim of GPT-3.5 speed with a 16K context limit.

They're dubiously naming it Phind V7. Also, they've ripped off WizardLM's code in the past and rebranded it to secure seed funding.

I doubt it's based on CodeLlama 34B, unless they trained on a dataset that makes the model hallucinate that it's GPT-3.5 Turbo.

[–] kristaller486@alien.top 1 points 1 year ago (1 children)

They trained their model on synthetic GPT-3.5-turbo data plus a mix of their own data. It's unsurprising that V7 says "I am gpt-3.5"; what's problematic is that training on synthetic OpenAI output violates OpenAI's terms of use.
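Distilled data often carries the teacher model's self-identification verbatim, so the student learns to repeat it. A minimal sketch (hypothetical record contents, not Phind's actual data) of how such a leak ends up in a chat-format fine-tuning file like OpenAI's JSONL format:

```python
import json

# Hypothetical distilled training record in chat JSONL format.
# The assistant turn was generated by GPT-3.5-turbo, so its
# self-identification is baked into the training data verbatim.
record = {
    "messages": [
        {"role": "user", "content": "What model are you?"},
        {"role": "assistant",
         "content": "I am ChatGPT, a language model based on gpt-3.5-turbo."},
    ]
}

line = json.dumps(record)  # one line of the fine-tuning JSONL file
print("gpt-3.5" in line)   # → True: the identity string is in the data
```

A model fine-tuned on enough records like this will answer "I am GPT-3.5" regardless of its actual base weights, which is why self-reports are weak evidence of provenance.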

[–] cuyler72@alien.top 1 points 1 year ago

OpenAI's terms only mean they might ban your account if they catch you collecting the data. The data itself is not copyrightable in any way; OpenAI has no legal right to control its use.