So one thing that had really been bothering me was that recent arXiv paper claiming that, despite GPT-3 being 175B and GPT-4 reportedly being around 1.7T, GPT-3.5 Turbo was somehow only 20B.
It had been on my mind for the past couple of days because it just made no sense to me, so this evening I went to check out the paper again and noticed that I could not download the PDF or PostScript. Then I saw this update comment on the arXiv page, added yesterday:
"Contains inappropriately sourced conjecture of OpenAI's ChatGPT parameter count from this http URL, a citation which was omitted. The authors do not have direct knowledge or verification of this information, and relied solely on this article, which may lead to public confusion"
That link leads to a Forbes article, from before GPT-4 was even released, claiming that ChatGPT in general is 20B parameters:
"It seems like the chatbot application was one of the most popular ones, so ChatGPT came out first. ChatGPT is not just smaller (20 billion vs. 175 billion parameters) and therefore faster than GPT-3, but it is also more accurate than GPT-3 when solving conversational tasks—a perfect business case for a lower cost/better quality AI product."
It would appear that they sourced that figure from Forbes, and after everyone got really confused they realized it might not actually be correct, so the paper got modified.
So, before some wild urban legend forms that GPT-3.5 is 20B, I just thought I'd mention that lol.
GPT-3.5 probably has more than 20B parameters, but then why is its API several times cheaper than text-davinci-003?
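For a sense of scale, here's a quick back-of-the-envelope comparison, assuming the launch pricing I remember (roughly $0.02 per 1K tokens for text-davinci-003 and $0.002 per 1K tokens for gpt-3.5-turbo; those numbers are from memory, not authoritative, so check OpenAI's pricing page):

    # Rough per-token cost comparison (prices are assumptions from memory)
    davinci_per_1k = 0.020   # text-davinci-003, USD per 1K tokens
    turbo_per_1k = 0.002     # gpt-3.5-turbo, USD per 1K tokens
    print(f"gpt-3.5-turbo is roughly {davinci_per_1k / turbo_per_1k:.0f}x cheaper per token")

A ~10x price drop is consistent with a much cheaper-to-serve model, but it could also come from quantization, better batching, or subsidized pricing, so on its own it doesn't pin down a parameter count.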
At the same time, GPT-3.5 is good with facts and great at generating text in many languages, while open-source models of that size aren't always solid even in English. With only 20B parameters it's hard to store that much knowledge, so there are probably a lot more than 20B.
I agree. It's possible it's that small, but I just think that's unlikely.
Probably heavily quantized and based on a smaller GPT-3 model.
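To put rough numbers on that, here's a back-of-the-envelope sketch of how much memory the weights alone would need at different precisions. The parameter counts are just the figures thrown around in this thread, and it ignores the KV cache, activations, and serving overhead:

    # Approximate weight memory for a dense model:
    # params_billion * 1e9 weights, each bits_per_weight / 8 bytes, expressed in GB
    def weight_memory_gb(params_billion, bits_per_weight):
        return params_billion * bits_per_weight / 8

    for params in (175, 20):
        for bits in (16, 8, 4):
            print(f"{params}B params @ {bits}-bit: ~{weight_memory_gb(params, bits):.0f} GB of weights")

So a heavily quantized 20B-class model would fit on a single GPU, while a 175B model at 16-bit needs a multi-GPU setup, which would be one plausible way to explain cheaper serving, whatever the actual size is.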