Scary-Knowledgable
Access to powerful, open-source LLMs has also inspired a community devoted to refining the accuracy of these models, as well as reducing the computation required to run them. This vibrant community is active on the Hugging Face Open LLM Leaderboard, which is updated often with the latest top-performing models.
That's a nice indirect shout-out.
The stages of learning
1. Unconscious incompetence - you don't know that you don't know.
2. Conscious incompetence - you try something and it doesn't work.
3. Conscious competence - you get to the point where things work and you understand how you did it.
4. Unconscious competence - you can do things without having to consciously think them through; you can enter a flow state.
MS is a lot more than just its OpenAI investment; shorting MS based on OpenAI alone doesn't seem like the best idea.
And we find the backend is just Mechanical Turk.
He spoke at the Cambridge Union on the 1st of November to receive the Hawking Fellowship; from the talk, the allegations sound like a lot of BS. It's a shame I can't short their stock - https://www.youtube.com/watch?v=NjpNG0CJRMM
I use oobabooga. I'm actually testing it with Language Agent Tree Search to see if it can produce better outputs - https://github.com/andyz245/LanguageAgentTreeSearch
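For anyone who wants to script that kind of experiment rather than poke at the UI, here's a minimal sketch of querying a local text-generation-webui (oobabooga) instance through its OpenAI-compatible API. It assumes the server was launched with the --api flag on the default port 5000; the prompt and sampling settings are just placeholders.

```python
# Minimal sketch: query a local oobabooga (text-generation-webui) server
# via its OpenAI-compatible API. Assumes the server was started with
# --api and is listening on the default port 5000; adjust the URL to match
# your setup.
import requests

url = "http://127.0.0.1:5000/v1/chat/completions"
payload = {
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(url, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

A tree-search agent like LATS is essentially a loop over calls like this, scoring and expanding the candidate responses.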
Deepseek Coder 33B worked well for me; I asked it to make the game Snake and it did it on the first try with the 4-bit GPTQ - https://huggingface.co/TheBloke/deepseek-coder-33B-instruct-GPTQ
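If you'd rather load that checkpoint directly in Python instead of through a UI, here's a minimal sketch with transformers. It assumes auto-gptq/optimum are installed so the GPTQ weights can be loaded, and that you have roughly 20 GB of VRAM for the 4-bit 33B model; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load TheBloke's 4-bit GPTQ build of DeepSeek Coder 33B.
# Assumes auto-gptq (and optimum) are installed so transformers can load
# the GPTQ weights, plus enough VRAM for a 4-bit 33B model (~20 GB).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/deepseek-coder-33B-instruct-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# DeepSeek's instruct models ship a chat template, so we can let the
# tokenizer format the instruction wrapper for us.
messages = [{"role": "user", "content": "Write the game Snake in Python."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.2)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```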
Other models are available to run on CPU/GPU - https://huggingface.co/models?search=deepseek%2033b
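For the CPU/GPU route, a small sketch with llama-cpp-python running a GGUF quantization. The filename below is hypothetical; substitute whichever quant you actually downloaded, and n_gpu_layers controls how much of the model is offloaded to the GPU.

```python
# Minimal sketch: run a quantized model on CPU with llama-cpp-python,
# optionally offloading some layers to the GPU. The GGUF filename is
# hypothetical; use the quantization you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-coder-33b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,      # context window
    n_gpu_layers=0,  # 0 = pure CPU; raise it to offload layers to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the game Snake in Python."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```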
https://github.com/khaimt/qa_expert