Tobberone

joined 2 years ago
[–] Tobberone@lemm.ee 10 points 8 months ago

If a magazine that doesn't usually cover cars suddenly covers cars, my reaction isn't "this must be great". It's "how much did that plug cost then?"

[–] Tobberone@lemm.ee 1 point 8 months ago

There must be. Recall and infosec are mutually exclusive by definition!

[–] Tobberone@lemm.ee 2 points 8 months ago

I'm just getting started, but my plan is to use it to evaluate policy docs. There is so much context to keep up with, so any way to load more of it into the analysis will be helpful. Learning how to add Excel data to the analysis will also be a big step forward.
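In case it helps: a minimal sketch of one way to pull spreadsheet data into a prompt, assuming pandas and the ollama Python client; the file name and model are placeholders, not anything from this thread.

```python
# Hypothetical sketch: feed spreadsheet contents into an LLM prompt by
# serialising them to plain text first. Assumes pandas (with openpyxl)
# and the ollama package; file name and model are placeholders.
import pandas as pd
import ollama

df = pd.read_excel("policy_data.xlsx")  # read the spreadsheet
table_text = df.to_csv(index=False)     # serialise it as plain text

response = ollama.chat(
    model="qwen2.5:14b",
    messages=[{
        "role": "user",
        "content": f"Given this data:\n\n{table_text}\n\nEvaluate the policy implications.",
    }],
)
print(response["message"]["content"])
```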

I will have to check out Mistral :) So far Qwen2.5 14B has been the best at analysing my test scenario, but I guess an even higher-parameter model will have its advantages.

[–] Tobberone@lemm.ee 5 points 8 months ago

And exactly why are they missing? Who stole what at Microsoft?

[–] Tobberone@lemm.ee 2 points 8 months ago (2 children)

Thank you! Very useful. I am, again, surprised by how much a better way of asking questions affects the answers, almost as much as using a better model.

[–] Tobberone@lemm.ee 1 point 8 months ago

This is expected. Oil prices have been on the decline for some time. I didn't expect demand to erode this fast, though, which I guess is kind of a good thing.

The only way forward is for renewables to become even cheaper than fossils, which can be done. The EU's Fit for 55 will bring down energy prices. Because of this, we will see really low summertime electricity prices in Europe over the coming decade.

[–] Tobberone@lemm.ee 1 point 8 months ago (1 child)

I need to look into flash attention! And if I understand you correctly, a larger llama3.1 model would be better prepared to handle a larger context window than a smaller llama3.1 model?
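For anyone else poking at the same settings, a minimal sketch of requesting a larger context window per call, assuming the ollama Python client; the model name and num_ctx value are just examples (and note that flash attention is enabled server-side via the OLLAMA_FLASH_ATTENTION environment variable, not per request).

```python
# Hypothetical sketch: ask a model one question with an enlarged
# context window. Assumes the `ollama` Python package; model name
# and num_ctx are example values, not recommendations.
import ollama

response = ollama.chat(
    model="llama3.1:8b",
    messages=[{"role": "user", "content": "Summarise this policy document: ..."}],
    options={"num_ctx": 16384},  # raise the context window for this request
)
print(response["message"]["content"])
```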

[–] Tobberone@lemm.ee 1 point 8 months ago (1 child)

Thanks! I actually picked up the concept of a context window, and from there how to create a modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how does model size compare?
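For reference, a minimal sketch of the kind of modelfile I mean, using stock Ollama Modelfile syntax; the base model, context size, and system prompt are just example values.

```
# Hypothetical example Modelfile; base model, context size, and system
# prompt are placeholders. Build with: ollama create policy-helper -f Modelfile
FROM llama3.2
PARAMETER num_ctx 8192
SYSTEM You summarise policy documents concisely and cite the relevant sections.
```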

[–] Tobberone@lemm.ee 2 points 8 months ago (1 children)

Thank you for your detailed answer :) It's been 20 years and 2 kids since I last tried my hand at reading code, but I'm doing my best to catch up 😊 The context window is a concept I picked up from your links, and it has helped me a lot!

[–] Tobberone@lemm.ee 1 point 8 months ago (3 children)

The problem I keep running into with that approach is that only the last page actually gets summarised, and some of the texts are... longer.
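In case anyone hits the same wall, a minimal sketch of chunked, map-reduce-style summarisation with the ollama Python client; the model name and chunk size are my own assumptions, not settings from this thread.

```python
# Hypothetical sketch: summarise a long text chunk by chunk, then
# summarise the partial summaries, so earlier pages aren't silently
# dropped when the text exceeds the context window. Assumes the
# `ollama` package; model and chunk size are placeholders.
import ollama

MODEL = "qwen2.5:14b"  # example model; any local model works
CHUNK_CHARS = 6000     # stay under the effective prompt limit

def summarise(text: str) -> str:
    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": f"Summarise:\n\n{text}"}],
    )
    return response["message"]["content"]

def summarise_long(text: str) -> str:
    # Map: summarise each chunk independently.
    chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]
    partials = [summarise(chunk) for chunk in chunks]
    # Reduce: merge the partial summaries into one final summary.
    return summarise("\n\n".join(partials))
```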

[–] Tobberone@lemm.ee 6 points 8 months ago (15 children)

Do you know of any nifty resources on how to set up RAG using ollama/webui? (Or even fine-tuning?) I've tried to set it up, but the documents provided don't seem to be analysed properly.

I'm trying to get the LLM to read/summarise a certain type of (wordy) file, and it seems the query prompt is limited to about 6k characters.
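For anyone following along, a minimal sketch of the retrieval idea behind RAG, using the ollama Python client; the embedding model and chunking are my own assumptions, not how Open WebUI does it internally.

```python
# Hypothetical sketch: bare-bones retrieval-augmented generation.
# Chunks are embedded, the question retrieves the most similar ones,
# and only those go into the prompt, sidestepping the prompt-length
# limit. Assumes the `ollama` package and a local embedding model
# such as nomic-embed-text (an assumption, not a requirement).
import ollama

def embed(text: str) -> list[float]:
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

def answer(question: str, chunks: list[str], top_k: int = 3) -> str:
    # Retrieve: rank document chunks by similarity to the question.
    q_vec = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(embed(c), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])
    # Generate: answer using only the retrieved chunks as context.
    response = ollama.chat(
        model="qwen2.5:14b",
        messages=[{
            "role": "user",
            "content": f"Context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return response["message"]["content"]
```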

[–] Tobberone@lemm.ee 0 points 8 months ago

I couldn't disagree more with you. If there are pedestrians nearby, you drive slowly and keep your distance, regardless of where you are driving.

The same goes for pedestrians, though. Don't walk where it's not safe, for everyone's safety. Like on the interstate. It's a shared responsibility.

This, however, is in the middle of a neighborhood where a ball and a kid could come flying at a moment's notice...
