this post was submitted on 21 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

I recently started using the base model of LLaMA-2-70B for creative writing and, surprisingly, found that most of my prompts from ChatGPT actually work with the "base model" too, suggesting it may also have been fine-tuned a bit on ChatGPT-like instructions.

Curious if anyone has tried both the LLaMA 1 and LLaMA 2 base models and can share their experience with creativity? My hunch is LLaMA 1 might be slightly better at it, assuming it hasn't gone through as much alignment.

[–] ithkuil@alien.top 1 points 11 months ago (2 children)

I think it's best to keep temperature low and feed the randomness in manually. For example, generate a list of words and ask the model to make an association for each, then use those as inspiration for characters, plot, whatever. Or make a list of options for each element that makes sense, have a model generate Python code to randomly select one of each, and then put those random selections in the prompt.
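A minimal sketch of that second approach (the categories and options below are made up for illustration, not taken from this thread):

```python
import random

# Hypothetical story "ingredients"; in practice these lists could come from the
# model itself or from your own brainstorming.
options = {
    "protagonist": ["retired detective", "exiled botanist", "teenage cartographer"],
    "setting": ["flooded megacity", "remote monastery", "generation ship"],
    "conflict": ["a forged map", "a vanished sibling", "a failing oxygen garden"],
}

# Pick one option per category outside the model, so the randomness comes from
# here rather than from sampling temperature.
picks = {category: random.choice(choices) for category, choices in options.items()}

prompt = (
    "Write a short story outline featuring a {protagonist}, set in a {setting}, "
    "centered on {conflict}.".format(**picks)
)
print(prompt)
```

The model only ever sees the already-chosen combination, so it can run at a low temperature and still produce something different on every run.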

Because, in my experience, a temperature much higher than zero mostly just makes the model dumber.

[–] nuvalab@alien.top 1 points 11 months ago (1 children)

That's an interesting idea... In my experience anything <1 works, >1.2 goes wild, and for things we expect to be a bit more deterministic, setting it to 0 is preferred.
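For concreteness, a rough sketch of where those values plug in, assuming llama-cpp-python and a local GGUF copy of the 70B base model (the path, prompt, and sampling values are placeholders, not recommendations from this thread):

```python
from llama_cpp import Llama

# Hypothetical local path to a quantized LLaMA-2-70B base model.
llm = Llama(model_path="llama-2-70b.Q4_K_M.gguf", n_ctx=4096)

prompt = "Write the opening paragraph of a noir story set in a rain-soaked city."

# temperature=0 collapses sampling to the most likely token: deterministic output.
deterministic = llm(prompt, max_tokens=200, temperature=0.0)

# Something below 1.0 tends to stay coherent; much above ~1.2 tends to go off the rails.
creative = llm(prompt, max_tokens=200, temperature=0.8, top_p=0.95)

print(deterministic["choices"][0]["text"])
print(creative["choices"][0]["text"])
```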

What's your best setup and temperature for creative writing?

[–] ithkuil@alien.top 1 points 11 months ago

Same answer, actually, for creative writing: 0.