this post was submitted on 12 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

Hi. I'm fairly new to NLP and ML as a whole, and I'm looking to build a text classification model. I've tried it with DeBERTa and the results are decent (about 70% accuracy), but I need more. Are generative models a better alternative, or should I stick to smaller models like BERT, or maybe even non-NN classifiers, and work on better dataset quality?
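
(By non-NN classifiers I mean something along the lines of the sketch below: a TF-IDF + logistic regression baseline. The data here is just a placeholder to show the shape of the pipeline.)

```python
# Rough sketch of a non-NN baseline: TF-IDF features + logistic regression.
# `texts` and `labels` are tiny placeholders; substitute the real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "great product, would buy again",
    "terrible service and slow shipping",
    "works exactly as expected",
    "arrived broken and support never replied",
]
labels = ["positive", "negative", "positive", "negative"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

print(clf.predict(["the product is great"]))  # e.g. ['positive'] on this toy data
```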

[–] sshh12@alien.top 1 points 10 months ago

+1, when in doubt, LLM it out.

You could also ask for explanations, so that when it gets something wrong you can work on modifying your prompts/examples to get better performance.
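
As a rough sketch (not a specific recommendation), that could look something like the code below, assuming an OpenAI-compatible chat endpoint (which many local servers such as llama.cpp or vLLM also expose); the base URL, model name, and label set are placeholders:

```python
# Sketch: zero-shot classification via a chat LLM, asking for the label plus a
# short explanation that can be used to debug and refine the prompt.
# Assumes an OpenAI-compatible endpoint; base_url, model, and labels are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

LABELS = ["billing", "technical_issue", "other"]  # hypothetical label set

def classify(text: str) -> str:
    prompt = (
        f"Classify the following text into one of {LABELS}.\n"
        "Reply on two lines: first the label, then a one-sentence explanation.\n\n"
        f"Text: {text}"
    )
    resp = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

print(classify("I was charged twice for my subscription this month"))
```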

Potentially you wouldn't want to do this if:

  • Your classification problem is very unusual/cannot be explained by a prompt
  • You want to be able to run this extremely fast or on a ton of data
  • You want to learn non-LLM deep learning/NLP (in which case I'd suggest some form of fine-tuning BERT; rough sketch below)
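
That last option would look roughly like the sketch below, using the Hugging Face Trainer. The dataset (ag_news), hyperparameters, and label count are just illustrative; swap in your own data.

```python
# Rough outline of fine-tuning BERT for text classification with Hugging Face.
# Dataset, hyperparameters, and num_labels are illustrative, not tuned values.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("ag_news")  # example public dataset with "text"/"label" columns
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4  # ag_news has 4 classes
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-text-clf",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
)
trainer.train()
print(trainer.evaluate())  # reports eval loss; add compute_metrics for accuracy
```

The same skeleton works for DeBERTa by swapping the model name, so it's also a reasonable starting point for squeezing more out of what you already have.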