coumineol

joined 11 months ago
[–] coumineol@alien.top 1 points 9 months ago (1 children)

https://arxiv.org/abs/2106.04554

If you're trying to learn more about language models, don't bother with anything written before 2020. That's basically the Stone Age.

[–] coumineol@alien.top 1 points 9 months ago

Didn't mean to say those papers are completely useless, but even for those with a strong math/ML background I would advise starting with recent survey papers. Reading "Attention is All You Need" is kind of like reading Einstein's General Relativity papers: cool as a historical curiosity, but not the most efficient way to build expertise.

[–] coumineol@alien.top 1 points 9 months ago (5 children)

Not to start an argument here, but I can't imagine anybody, at any level of understanding, for whom the "Attention is All You Need" paper is the right place to start diving deeper. Yes, this is a diverse community, but when you try to address everybody's needs, you usually end up addressing nobody's.

[–] coumineol@alien.top 1 points 9 months ago (7 children)

Thanks, but here's the problem with this list: most of the papers mentioned are very technical, and the people able to understand them have probably already read them. Note that Andrej was careful to keep the material at a certain level, because he's addressing those who want to go one step further than talking to ChatGPT without necessarily understanding all the underlying theory.

[–] coumineol@alien.top 1 points 10 months ago

Is that AGI?


Specifically asking for RAG applications. I'd appreciate any tips on current best practices for complex document retrieval in a low-resource language. Thanks.


ICLR 2024 papers, along with the reviewers' comments and scores, are available to access here. I'm sharing a list of those I think this community may find useful. To build it, I did some filtering by keyword, then set a score threshold using a modified weighted Bayesian average, skimmed the remaining ~100 papers, and read the more interesting ones in full. Note that this is not necessarily a list of "best papers": it leans toward the simple rather than the complex, the practical rather than the theoretical, and the incremental rather than the groundbreaking. Anyway, here they are:

  1. Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing: A nice, simple method to cost-optimize your pipeline by calling an expensive LLM only when necessary and falling back to your small one otherwise. It's pretty modular: you can enable or disable it whenever you want, and tune how often queries get routed to each model.
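The core routing idea is tiny; a minimal sketch, assuming you already have some difficulty scorer (the paper trains a small router model for this — all names below are illustrative, not the paper's API):

```python
def route_query(query, small_llm, large_llm, difficulty, threshold=0.5):
    """Send a query to the cheap model unless it looks hard.

    `difficulty` is any scorer returning a value in [0, 1];
    `threshold` controls how much traffic reaches the big model.
    """
    if difficulty(query) < threshold:
        return small_llm(query)
    return large_llm(query)
```

Raising `threshold` routes more traffic to the small model, trading answer quality for cost.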

  2. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs: Presents a dataset and a benchmark for increasing and measuring the capabilities of LLMs as agents, with impressive results especially for smaller models.

  3. Re-Reading Improves Reasoning in Language Models: A very simple approach that just repeats the question within the prompt, with surprisingly good results.
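The whole trick fits in one function; the exact wording of the re-read cue below is my paraphrase, not necessarily the paper's template:

```python
def re_read_prompt(question):
    # State the question twice so the model attends to it again
    # before it starts producing an answer.
    return f"{question}\nRead the question again: {question}"
```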

  4. CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing: Presents a framework where a more accurate output can be obtained via a verification-correction cycle, using external sources such as websites and knowledge bases.
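The verification-correction cycle can be sketched roughly like this (function names and the loop structure are mine, not the paper's actual interface):

```python
def critic_loop(query, generate, critique, correct, max_rounds=3):
    """Draft an answer, then repeatedly verify and revise it.

    `critique` consults an external source (web search, code
    interpreter, knowledge base) and returns feedback, or None
    when it finds no problem with the current answer.
    """
    answer = generate(query)
    for _ in range(max_rounds):
        feedback = critique(query, answer)
        if feedback is None:
            break
        answer = correct(query, answer, feedback)
    return answer
```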

  5. SuRe: Improving Open-domain Question Answering of LLMs via Summarized Retrieval: Aims to make models give more useful and accurate answers by generating multiple candidate answers and comparing them pairwise. Potentially a bit slow, but provides meaningful improvements.
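Reduced to its core, the pairwise comparison is a simple tournament; in practice the `prefer` judge would itself be an LLM call (everything here is illustrative):

```python
def best_candidate(question, candidates, prefer):
    # Pairwise tournament: keep whichever answer of each pair
    # the judge prefers, then challenge it with the next one.
    best = candidates[0]
    for challenger in candidates[1:]:
        best = prefer(question, best, challenger)
    return best
```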

  6. Large Language Models Are Not Robust Multiple Choice Selectors: Improving multiple-choice selection is one of the areas I find most interesting. This paper contributes a simple way to mitigate the well-known "token bias" in such prompts.

  7. ToolChain*: Efficient Action Space Navigation in Large Language Models with A* Search: Adapts the classic A* search algorithm to help LLMs plan more robust solutions to complicated real-world problems. Results look like a significant improvement over similar methods such as ReAct.

  8. LoftQ: LoRA-Fine-Tuning-aware Quantization for Large Language Models: Now, I'm not extremely knowledgeable about the technical aspects of fine-tuning, TBH, but this paper has stellar reviews and claims a significant decrease in model size with minimal degradation in performance, so I'll just include it.

  9. OctoPack: Instruction Tuning Code Large Language Models: Presents a dataset for fine-tuning models as coding assistants, also fine-tunes a 16B model and shares the results, which seem better than anything other than GPT-4 (admittedly still a far cry from GPT-4 itself, but baby steps, I guess).

  10. BooookScore: A systematic exploration of book-length summarization in the era of LLMs: Has very high scores, and IMHO justifiably so. Provides a carefully crafted evaluation metric for book-length summarization that seems to greatly reduce the need for manual evaluation by humans.

Apart from those, there are several great-looking papers I excluded due to the practicality bias, but they may be the topic of another post on another sub. Let me know if you find any of the above especially useful!
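For the curious: the score-threshold step mentioned at the top starts from a standard weighted Bayesian average of review scores (the modifications aren't spelled out here, and the prior values below are arbitrary examples):

```python
def bayesian_average(scores, prior_mean=5.0, prior_weight=3.0):
    # Shrink the raw mean toward `prior_mean`: papers with few reviews
    # get pulled harder toward the prior, so one enthusiastic reviewer
    # can't dominate the ranking on their own.
    return (prior_weight * prior_mean + sum(scores)) / (prior_weight + len(scores))
```

With these defaults, two reviews of 8 average out to 6.2 rather than 8, while four reviews of 8 pull the estimate closer to the raw mean.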