A0sanitycomp

joined 1 year ago
[–] A0sanitycomp@alien.top 1 points 11 months ago

Noromaid 20b and dolphin-mistral 7b do fairly well. Neither has much of a context length, though.

[–] A0sanitycomp@alien.top 1 points 11 months ago (2 children)

Lol 😂 I’d need an upgrade

 

Setting benchmarks aside, if you had to choose one to use instead of ChatGPT for the next 6 months, which one would you pick? Recently I've been experiencing some extreme slowdowns and poor answers from GPT, so I'm going to run a local backup for the time being to assist when GPT-4 is down. I'm leaning towards Mistral, but I can be convinced to test some others.
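To be concrete about what I mean by "local backup", here's a rough sketch of the kind of fallback call I have in mind, assuming the model is served locally with Ollama (the model name and endpoint are just my setup, not something from the thread):

```python
# Rough sketch: fall back to a local Mistral (served via Ollama) when GPT-4 is down.
# Assumes `ollama pull mistral` has been run and the server is on its default port.
import requests

def ask_local_mistral(prompt: str) -> str:
    # Ollama's /api/generate endpoint returns the whole completion when stream=False.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_mistral("Give me three plot ideas for a short story."))
```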

[–] A0sanitycomp@alien.top 1 points 11 months ago

What models are you using? I’ve had no luck with anything. Actually, that orca-mini 3b is good at writing things matter-of-factly, but it doesn’t go into great detail about anything.

[–] A0sanitycomp@alien.top 1 points 11 months ago

I’m not exactly a top-tier programmer, so anything I make is lucky to work. Given the resources, though, I would always consider using the best language for the job, so yeah.

[–] A0sanitycomp@alien.top 1 points 11 months ago (2 children)

This is quite simple for me… I only know Python and very small amounts of JavaScript/HTML/CSS. More important than efficiency gains is just me getting the job done, which really is an efficiency gain in itself.

 

From what I’ve read, Macs somehow use system RAM for this while Windows machines use the GPU? It doesn’t make any sense to me. Any help appreciated.
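If it helps frame the question, here's roughly how I understand the setup, as a sketch assuming llama-cpp-python (the GGUF file name is just a placeholder, not a specific recommendation):

```python
# Sketch: the same library covers both cases.
# On Apple Silicon the Metal backend reads the model out of ordinary system RAM
# (unified memory shared with the GPU); on a Windows/NVIDIA box the same option
# offloads layers into the card's VRAM instead.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/dolphin-mistral-7b.Q4_K_M.gguf",  # placeholder local GGUF file
    n_gpu_layers=-1,  # -1 = offload every layer the backend can handle
    n_ctx=4096,
)

out = llm("Q: Why can a Mac load a big model without a big GPU? A:", max_tokens=128)
print(out["choices"][0]["text"])
```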

[–] A0sanitycomp@alien.top 1 points 1 year ago

New to this. What does this part mean?

Model uses ChatML

<|im_start|>system

<|im_end|>

<|im_start|>user

How to plot my story?<|im_end|>

<|im_start|>assistant
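As far as I can tell, ChatML is just a plain-text prompt template: each turn is wrapped in <|im_start|>role … <|im_end|> markers, and the trailing <|im_start|>assistant tells the model it's its turn to write. A minimal sketch of building that exact prompt in Python (the system text is left empty, matching the card above; how you then feed the string to the model depends on your backend):

```python
# Minimal sketch of assembling a ChatML prompt like the one in the model card.
def chatml_prompt(system: str, user: str) -> str:
    # Each turn is wrapped in <|im_start|>{role}\n ... <|im_end|> markers;
    # the final <|im_start|>assistant line cues the model to respond.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt(system="", user="How to plot my story?")
print(prompt)  # pass this string to the model as-is
```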