[–] FallMindless3563@alien.top 1 points 9 months ago

The only book he explicitly mentions is "Thinking, Fast and Slow" by Daniel Kahneman, but I think there are a ton of books that would be great resources alongside the papers. I just happened to pull a lot of the papers from the footnotes and concepts he mentioned.

[–] FallMindless3563@alien.top 1 points 9 months ago

You certainly can combine all the tasks and datasets into a single instruction fine-tuning dataset. Then you would have a separate dataset for the reinforcement learning half, where the model learns human preferences.
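
In case a concrete picture helps, here is a minimal sketch (my own illustration, not from the talk or the post) of what the two datasets could look like: several tasks flattened into one instruction fine-tuning set, plus a separate set of preference pairs for the RLHF stage. All dataset names, fields, and examples are hypothetical placeholders.

```python
from typing import Dict, List

# Hypothetical per-task datasets, each already in (instruction, response) form.
summarization: List[Dict[str, str]] = [
    {
        "instruction": "Summarize: The quick brown fox jumps over the lazy dog.",
        "response": "A fox jumps over a dog.",
    },
]
question_answering: List[Dict[str, str]] = [
    {
        "instruction": "What is the capital of France?",
        "response": "Paris.",
    },
]

# Stage 1: supervised instruction fine-tuning.
# Concatenate all tasks into one dataset (and shuffle before training).
sft_dataset = summarization + question_answering

# Stage 2: reinforcement learning from human feedback.
# A separate dataset of preference pairs (prompt, chosen, rejected),
# used to train a reward model or run a DPO-style objective.
preference_dataset = [
    {
        "prompt": "Explain photosynthesis to a 5-year-old.",
        "chosen": "Plants use sunlight to make their own food.",
        "rejected": "Photosynthesis converts photons into ATP via the Calvin cycle...",
    },
]

if __name__ == "__main__":
    print(f"{len(sft_dataset)} SFT examples, {len(preference_dataset)} preference pairs")
```

The point of the split is that the instruction data teaches the model what a good answer looks like, while the preference data only ranks answers against each other, which is what the reward/preference-optimization step consumes.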

 

I loved Andrej’s talk in his “Busy person’s intro to Large Language Models” video, so I decided to create a reading list to dive deeper into a lot of the topics. I feel like he did a great job of describing the state of the art for anyone from an ML researcher to any engineer who is interested in learning more.

The full talk can be found here: https://youtu.be/zjkBMFhNj_g?si=fPvPyOVmV-FCTFEx

Here’s the reading list: https://blog.oxen.ai/reading-list-for-andrej-karpathys-intro-to-large-language-models-video/

Let me know if you have any other papers you would add!

[–] FallMindless3563@alien.top 1 points 10 months ago

I’d like to think we dove deep, but let me know!