Hi. So I am a bit new to NLP and ML as a whole, and I am looking to create a text classification model. I have tried it with DeBERTa and the results are decent (about 70% accuracy), but I need more. Are generative models a better alternative, or should I stick to smaller models like BERT, or maybe even non-NN classifiers, and work on better dataset quality?
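For reference, a minimal sketch of the kind of DeBERTa fine-tune being described, assuming a Hugging Face setup; the checkpoint, data files, column names, and hyperparameters below are placeholders, not details from the actual setup:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "microsoft/deberta-v3-base"  # assumption: any DeBERTa checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical CSV files with "text" and "label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="deberta-clf",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```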

[–] phree_radical@alien.top 1 points 10 months ago (2 children)

Maybe it's overkill, idk, but if you want higher accuracy, it's an option.

You can just list examples from your dataset and let the LLM complete the last one.

Example:

# Classify text

(a) advertisement
(b) poetry
(c) information

Ignore real-time AI and customers will do the same to you. Our vector database is AI-ready and proven at scale.
Class: (a)

I find no peace, and all my war is done. I fear and hope. I burn and freeze like ice. I fly above the wind, yet can I not arise
Class: (b)

YOUR BEST COMES OUT OF THE BLUE. EXPLORE BOISE STATE
Class: (a)

Two-month ramp closure: northbound OR 99W onto OR 217 north. Starts May 31. Oregon Department of Transportation. OR 217 AUXILIARY LANES
Class: (c)

Staying healthy. Staying active. We have it all right here. IN YOUR PRIME LEARN MORE LIVING YOUR BEST LIFE
Class: (a)

Go further, FASTER. Take the world's premier English-proficiency test in less than 2 hours!
Class: (a)

A rhinoceros beetle is a living thing. Rhinoceros beetles grow and respond to their environment. They need food and water.
Class: (c)

Our vice runs beyond all that old men saw, And far authentically above our laws, And scorning virtues safe and golden mean, Sits uncontrolled upon the high extreme.
Class: (b)

{your text here}
Class: ({generate one token}

I don't know about the task you have in mind specifically, but you can do just about anything with a 13B llama model. Picking a fine-tune doesn't matter if you use examples instead of instructions. 7B Mistral seems to do fine with this example (even GPT2 can do some classification), but in-context learning is remarkably better at 13B, picking up a lot more nuance
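A minimal sketch of what this looks like in code, assuming a Hugging Face causal LM and scoring only the label tokens at the final position; the checkpoint name is a placeholder, and the prompt is truncated here (in practice it would hold all the examples above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-13b-hf"  # assumption: any local llama-family model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

FEW_SHOT = """# Classify text

(a) advertisement
(b) poetry
(c) information

Ignore real-time AI and customers will do the same to you. Our vector database is AI-ready and proven at scale.
Class: (a)

I find no peace, and all my war is done. I fear and hope. I burn and freeze like ice.
Class: (b)

{text}
Class: ("""

def classify(text: str) -> str:
    inputs = tokenizer(FEW_SHOT.format(text=text), return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits after "Class: ("
    # Score only the three label tokens instead of sampling freely, so the
    # model can never answer anything but a, b, or c. Single-letter token ids
    # vary between tokenizers, so verify these for your model.
    label_ids = [tokenizer.encode(c, add_special_tokens=False)[-1] for c in "abc"]
    best = max(range(3), key=lambda i: logits[label_ids[i]].item())
    return "abc"[best]
```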

[–] Shoddy_Vegetable_115@alien.top 1 points 10 months ago

My classification task is to classify a given essay as AI-generated or human-generated, and I need the answer to be between 0 and 1 (both included), with 1 being AI-generated and 0 being human-generated.
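One way to turn the few-shot completion above into such a score, sketched under the assumption of a Hugging Face causal LM: label the examples 0 (human) and 1 (AI), then renormalize the next-token logits over just those two label tokens. The checkpoint name and the `few_shot_prompt` argument below are assumptions for illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"  # assumption: any local causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto")

def ai_probability(essay: str, few_shot_prompt: str) -> float:
    # few_shot_prompt is assumed to contain labelled essays and to end with
    # "{essay}\nClass: (" so the next token is the 0/1 label.
    inputs = tokenizer(few_shot_prompt.format(essay=essay),
                       return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]
    ids = [tokenizer.encode(c, add_special_tokens=False)[-1] for c in ("0", "1")]
    # Softmax over just the two label tokens gives a score in [0, 1].
    p_human, p_ai = torch.softmax(logits[ids], dim=0).tolist()
    return p_ai  # 1.0 ≈ AI-generated, 0.0 ≈ human-written
```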

Few-shot examples are a good idea for most classification tasks, but I don't think generative LLMs can pick up on the more intricate semantic patterns that differentiate AI-generated from human-written text with just a few examples. I'll try it, though, and let you know!

Btw do you think fine-tuning would be better?
