
That's pretty amazing. Thanks for all your hard work!
Does anyone know if the Nous Capybara 34B is uncensored?

 

This fella ran a needle-in-a-haystack test against GPT-4's new 128K context window and had some interesting findings (rough sketch of the test setup after the list):

* GPT-4’s recall performance started to degrade above 73K tokens

* Recall performance dropped when the fact to be recalled was placed at 7%-50% document depth

* If the fact was at the beginning of the document, it was recalled regardless of context length
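
For anyone who wants to poke at this themselves, here's a minimal sketch of that kind of needle-in-a-haystack test. Assumptions: the OpenAI Python client (v1+), `OPENAI_API_KEY` set in your environment, and some long filler text on disk. The needle, question, depths, and chars-per-token estimate are all illustrative, not Greg's exact setup:

```python
# Minimal needle-in-a-haystack sketch: bury a fact at varying depths in
# varying amounts of filler, then ask the model to retrieve it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

NEEDLE = "The best thing to do in San Francisco is eat a sandwich in Dolores Park."
QUESTION = "What is the best thing to do in San Francisco?"

def build_haystack(filler: str, n_tokens: int, depth: float) -> str:
    """Bury NEEDLE at `depth` (0.0 = start, 1.0 = end) of ~n_tokens of filler."""
    n_chars = n_tokens * 4                      # rough ~4 chars/token heuristic
    doc = (filler * (n_chars // len(filler) + 1))[:n_chars]
    cut = int(len(doc) * depth)
    return doc[:cut] + "\n" + NEEDLE + "\n" + doc[cut:]

def recalled(context: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4-1106-preview",             # the 128K model
        temperature=0,
        messages=[
            {"role": "system", "content": "Answer using only the document provided."},
            {"role": "user", "content": context + "\n\n" + QUESTION},
        ],
    )
    return "Dolores Park" in (resp.choices[0].message.content or "")

filler = open("filler.txt").read()              # any long plain-text corpus
for n_tokens in (16_000, 64_000, 73_000, 120_000):
    for depth in (0.0, 0.07, 0.25, 0.50, 0.90):
        print(n_tokens, depth, recalled(build_haystack(filler, n_tokens, depth)))
```

Fair warning: running a full grid of depths at 100K+ tokens per call is not cheap.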

Any thoughts on what OpenAI is doing to its context window behind the scenes? For example, which process or processes are they using to expand the context window?
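
Nobody outside OpenAI knows, but one published trick for stretching context (the one behind a lot of the extended-context Llama finetunes) is RoPE position interpolation: scale the position indices down so longer sequences land inside the positional range the model was trained on. A toy sketch of the idea in PyTorch; the head dim and scale factor are illustrative, and no claim this is what OpenAI actually does:

```python
import torch

def rope_angles(head_dim: int, max_pos: int, base: float = 10000.0,
                scale: float = 1.0) -> torch.Tensor:
    """Rotary-embedding angles. scale < 1.0 is linear position interpolation,
    e.g. scale = 4096 / 16384 stretches a 4K-trained model toward 16K."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    positions = torch.arange(max_pos).float() * scale   # squeeze positions down
    return torch.outer(positions, inv_freq)             # (max_pos, head_dim // 2)

# Vanilla 4K RoPE vs. the same frequencies interpolated out to 16K:
vanilla = rope_angles(128, 4096)
stretched = rope_angles(128, 16384, scale=4096 / 16384)
```

The squeezed positions stay inside the range the attention heads saw during training, which is why it usually works with only a little finetuning.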

He also says in the comments that at 64K and lower, retrieval was 100%. That's pretty impressive.

https://x.com/GregKamradt/status/1722386725635580292?s=20


Which models do you find to be good at 16k context for story writing?