Is the web version of ChatGPT 128k, or is that just via the API?
LocalLLaMA
Meta's lm-infinite has similar attention properties.
Their needle-in-a-haystack test isn't very compelling. Sure, no test is flawless, but with a random out-of-context fact placed at different points in the context window, there are plenty of reasons why a model might fail to retrieve it.
So what are the implications for real-world usage?
- It's able to retrieve any single piece of information from at least 65k tokens of context, provided that piece is small enough.
- What are the results with bigger chunks to be retrieved?
- Is it able to process all 64k tokens and generate an answer that takes the full 64k into account?
It's interesting for sure, but many more tests are needed to get a full picture of the real capabilities.
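For anyone who wants to run this kind of test themselves, here's a minimal sketch of how a needle-in-a-haystack prompt is usually built (this is my own illustration, not the exact harness OpenAI or anyone else used; the function and variable names are hypothetical):

```python
# Hypothetical needle-in-a-haystack prompt builder.
# Sweep a "needle" fact to different depths of a filler "haystack",
# then ask the model to retrieve it and score its answers.

def build_haystack_prompt(filler_sentences, needle, depth_fraction):
    """Insert `needle` at roughly `depth_fraction` of the filler text
    (0.0 = very start, 1.0 = very end), then append a retrieval question."""
    idx = int(len(filler_sentences) * depth_fraction)
    sentences = filler_sentences[:idx] + [needle] + filler_sentences[idx:]
    context = " ".join(sentences)
    question = "What is the magic number mentioned in the text above?"
    return f"{context}\n\n{question}"

# Filler text standing in for a long document; scale the count up
# until the prompt approaches the context window you want to probe.
filler = ["The quick brown fox jumps over the lazy dog."] * 1000
needle = "The magic number is 42."

# One prompt per depth; each would then be sent to the model under test.
prompts = {d: build_haystack_prompt(filler, needle, d)
           for d in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

The point of sweeping the depth is exactly the finding quoted below: if recall only succeeds at depth 0.0, the model (or some preprocessing trick) is privileging the start of the context rather than genuinely attending to all of it.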
Has anyone compared that with Claude 2's 100K?
Also, does GPT-4 32K have the same 100% accuracy across its whole context? And is that 64 out of 180 "absolute" or relative?
- If the fact was at the beginning of the document, it was recalled regardless of context length
Lol at OpenAI adding a cheap trick like this, since they know the first thing people will test at high context lengths is recall from the beginning.