this post was submitted on 27 Nov 2023 to LocalLLaMA

Some of the bigger/better models make me think local is doing pretty well (and it is, at chat), but exploring data cleaning has taken some wind out of my sails.

Not having much luck with the ones I've tried (think 34B Q5 of various flavours - all the usual suspects).

Say I've got a paragraph about something, and the text block contains some unrelated content, say "subscribe to our newsletter" or some other web-scraping artifact. I'd like to give the LLM an instruction to filter out content not related to the paragraph's topic.

Local LLMs... mostly failing. GPT-3.5... failing, I'd say, 40% of the time. GPT-4... usually works; call it 90%.

That's not entirely surprising, but the degree to which locals are failing at this task relative to closed is frustrating me a bit.

Hell, for some 34Bs I can't even get the local ones to suppress the opening

Here's the cleaned article:

...when the prompt literally says, word for word, not to include that. Are there specific LLMs for this? Or is my prompting just bad?

You are an expert at data cleaning. Given a piece of text you clean it up by removing artifacts left over from webscraping. Remove anything that doesn't seem related to the topic of the article. For example you must remove links to external sites, image descriptions, suggestions to read other articles etc. Clean it up. Remove sentences that are not primarily in English. Keep the majority of the article. The article is between the [START] and [END] marker. Don't include [START] or [END] in your response. It is important that there is no additional explanation or narrative added - just respond with the cleaned article. Do not start your response with "Here's the cleaned article:"

Unrelated: OpenAI guidance says to use """ as delimiters rather than the [START]/[END] markers I've got. Anybody know if that holds for local models too?
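For what it's worth, the delimiter choice is just string assembly either way; here's a minimal sketch (Python, function name and wording are my own, not from any library) of building such a prompt with triple-quote delimiters:

```python
def build_prompt(instruction: str, article: str, delim: str = '"""') -> str:
    """Wrap the article in delimiters so the model can tell instruction from data.

    Works the same whether delim is triple quotes or [START]/[END]-style tags;
    what matters most is that the delimiter is unlikely to occur in the article.
    """
    return (
        f"{instruction}\n\n"
        f"The article is between {delim} markers.\n"
        f"{delim}\n{article}\n{delim}"
    )

prompt = build_prompt(
    "Remove web-scraping artifacts. Respond with only the cleaned article.",
    "Some paragraph. Subscribe to our newsletter! More of the paragraph.",
)
```

Swapping `delim` lets you A/B test the two marker styles on the same model without rewriting the prompt.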

Small-Fall-6500@alien.top · 9 months ago

Having a dozen or so examples doesn’t help/work?

I don’t see you (or any other comments here) mentioning this, but few-shot prompting should help immensely compared to a detailed instruction alone. If you add at least a couple of examples after the instruction, I'd imagine most models would do much better.

(Assuming you can fit multiple examples in the context window)
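Concretely, a few-shot setup might look like this (a sketch only; the example pairs and the OpenAI-style chat message format, which most local runners also accept, are my assumptions):

```python
def few_shot_messages(system: str, examples: list[tuple[str, str]], query: str) -> list[dict]:
    """Build a chat transcript where each (dirty, clean) example pair appears
    as a prior user/assistant exchange, so the model imitates the pattern."""
    msgs = [{"role": "system", "content": system}]
    for dirty, clean in examples:
        msgs.append({"role": "user", "content": dirty})
        msgs.append({"role": "assistant", "content": clean})
    # The real query goes last, in the same position as the example inputs.
    msgs.append({"role": "user", "content": query})
    return msgs

msgs = few_shot_messages(
    "You clean web-scraped text. Respond with only the cleaned text.",
    [
        ("Great article body. Subscribe to our newsletter!", "Great article body."),
        ("More body text. [image: stock photo] The end.", "More body text. The end."),
    ],
    "Actual article to clean. Click here for related posts.",
)
```

Because the examples already show responses that start directly with cleaned text, this also tends to suppress the "Here's the cleaned article:" opener better than a negative instruction does.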