this post was submitted on 13 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.


Just wanted to share :)

So my initial thought was how so many people are shocked by the DALL-E and GPT integration, without realizing it's possible locally for free. Maybe not as polished as GPT, but still amazing.

And if you take into consideration all of OpenAI's censorship, it's just better, even if it can't handle crazy complicated prompts.

So I created this character for SillyTavern - Chub.
And I'm using oobabooga + SillyTavern + Automatic1111 to generate the prompt itself and the image automatically.

I can also ask it to change something, and the chatbot adjusts the original prompt accordingly.
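
For anyone curious how the pieces talk to each other, here's a minimal sketch of the loop: a local LLM writes the SD prompt, then Automatic1111's txt2img API renders it. The ports, payload fields, and system prompt are assumptions (oobabooga running with its OpenAI-compatible API enabled, Automatic1111 launched with --api), so adjust for your setup:

```python
# Minimal sketch: local LLM writes an SD prompt, Automatic1111 renders it.
# Ports and the system prompt are assumptions; adjust to your own setup.
import base64
import requests

LLM_API = "http://127.0.0.1:5000/v1/chat/completions"  # oobabooga, assumed port
SD_API = "http://127.0.0.1:7860/sdapi/v1/txt2img"      # Automatic1111 with --api

def llm_to_sd_prompt(request: str) -> str:
    """Ask the local LLM to turn a chat request into an SD prompt."""
    resp = requests.post(LLM_API, json={
        "messages": [
            {"role": "system",
             "content": "Rewrite the user's request as a comma-separated "
                        "Stable Diffusion prompt. Output only the prompt."},
            {"role": "user", "content": request},
        ],
        "max_tokens": 150,
    })
    return resp.json()["choices"][0]["message"]["content"].strip()

def generate_image(prompt: str, path: str = "out.png") -> None:
    """Send the prompt to Automatic1111 and save the first returned image."""
    resp = requests.post(SD_API, json={
        "prompt": prompt,
        "steps": 30,
        "width": 1024,
        "height": 1024,
    })
    with open(path, "wb") as f:
        f.write(base64.b64decode(resp.json()["images"][0]))

generate_image(llm_to_sd_prompt("draw a knight standing in the rain"))
```

SillyTavern's image-generation extension automates roughly this same round trip through the Automatic1111 API, which is what makes the in-chat "adjust the prompt" flow work.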

Did any of you do anything similar? What are your thoughts?

https://preview.redd.it/sltfe9osf30c1.png?width=1246&format=png&auto=webp&s=89f9490c81f4759ca35856e5b19c237b791fd647

[–] iChrist@alien.top 1 points 10 months ago (1 children)

Why do you need 70B? For prompting SD?

I found that for good prompts, even Mistral 7B does the job well!

You don't need 3 GPUs to run it all; I do it on a 3090.

I just installed TensorRT, which improves speeds by a big margin (Automatic1111).

I generate a 1024x1024, 30-step image in 3.5 seconds instead of 9.
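
If you want to check numbers like these on your own box, a rough timing sketch against the same Automatic1111 API (assuming it's launched with --api; the endpoint and payload mirror the sketch earlier in the thread):

```python
# Rough before/after timing for a TensorRT comparison.
# Assumes Automatic1111 is running locally with the --api flag.
import time
import requests

def time_txt2img(steps: int = 30, size: int = 1024) -> float:
    """Time a single txt2img call at the given step count and resolution."""
    start = time.perf_counter()
    requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json={
        "prompt": "a lighthouse at dusk, highly detailed",
        "steps": steps,
        "width": size,
        "height": size,
    })
    return time.perf_counter() - start

# Run once with the TensorRT extension disabled and once enabled.
print(f"1024x1024 / 30 steps: {time_txt2img():.1f}s")
```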

[–] a_beautiful_rhind@alien.top 1 points 10 months ago

I use the 70B to chat, and it also prompts SD during the convo. I agree that for just SD you can use almost any LLM.

IME, TensorRT didn't help much; it just shaved a second off. I also tried the vlad fork (diffusers) and compiling the model. On the 3090 I get somewhere around 6 seconds for 1024x1024, and I found that XL doesn't do as well at smaller resolutions.

For chat rather than serious SD, even 576x576 is "enough" on this 1080p laptop. On the P40 that takes 12 seconds.

For actual SD work, I'll try ComfyUI at some point. AFAIK, it's the only UI that does XL properly, passing the latent image to the refiner model. That's probably why my XL outputs don't look much better than good 1.5 models.
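
For what it's worth, the base-to-refiner latent handoff described here is what diffusers documents as the ensemble-of-expert-denoisers setup, so you can also get it outside ComfyUI. A minimal sketch with diffusers (the model IDs and the 0.8 split point are the commonly documented defaults, not anything from this thread):

```python
# Minimal sketch of passing SDXL base latents straight to the refiner
# with diffusers, instead of decoding to pixels in between.
# Assumes both models fit in VRAM.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait of a knight in the rain, cinematic lighting"

# Base model handles the first 80% of denoising and hands off raw latents.
latents = base(
    prompt=prompt, num_inference_steps=30,
    denoising_end=0.8, output_type="latent",
).images

# Refiner finishes the last 20% directly on those latents.
image = refiner(
    prompt=prompt, num_inference_steps=30,
    denoising_start=0.8, image=latents,
).images[0]
image.save("xl_refined.png")
```

Skipping that handoff (decoding the base output and re-encoding it for the refiner) loses detail, which would be consistent with XL outputs that don't look much better than good 1.5 models.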