this post was submitted on 29 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I have been playing with LLMs for novel writing. So far, all I have been able to use them for is brainstorming. No matter which model I use, the prose feels wooden, dull, and obviously AI-generated.

Is anyone else doing this? Are there particular models that work really well, or any prompts you recommend? Any workflow advice to better leverage LLMs in any way would be very appreciated!

[–] thereisonlythedance@alien.top 1 points 9 months ago (3 children)

Out of the box, I actually find the vanilla Llama-2 70B chat model produces the most natural prose, if prompted correctly. LongAlpaca 70B is also good at following style if you feed it a chunk of writing.
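(To make "prompted correctly" concrete: Llama-2 chat models were trained on a specific `[INST]` / `<<SYS>>` template, and prose quality tends to degrade if you send a bare string instead. A minimal sketch, with placeholder system and user text:)

```python
# Llama-2 chat expects the [INST] / <<SYS>> wrapping; the system and
# user strings below are just illustrative placeholders.
system = "You are a novelist with a spare, sensory prose style."
user = "Write the opening paragraph of a story set in a coastal town."

prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
print(prompt)
```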

But the best results I've had have come from fine-tuning Mistral 7B myself. Mistral writes remarkably well when trained right, though it can get muddled at longer contexts.

[–] AstronomerChance5093@alien.top 1 points 9 months ago (2 children)

Would you mind going into more detail on your fine-tuning methods: your dataset, how it's structured, etc.? I'm trying to get something similar going with Mistral at the moment, but not having much luck getting anything good out of it.

[–] thereisonlythedance@alien.top 1 points 9 months ago (1 children)

Sure.

I'm using an instruct-style dataset with a system field (in Axolotl I use either the orcamini dataset type or chatml). I collated a bunch of writing that I like (up to 4096 tokens in length) and then reverse-prompted it through an LLM to create instructions. So, for example, one sample might have a system field like "You are a professional author with a raw, visceral writing style" or "You are an AI built for storytelling." The instruction might then be "Write a short story about X that touches on themes of Y and Z, in the style of W," or a more detailed template setting out genre, plot, characters, scene description, POV, etc. The response is the actual piece. My dataset also includes some contemporary non-rhyming poetry, some editing/rephrasing samples, and some literary analysis.
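(A single sample in that layout might look like the sketch below. The field names follow the common system/instruction/output instruct format; check the docs for whichever Axolotl dataset type you pick, since key names vary by type, and the strings here are placeholders, not real training data.)

```python
import json

# One training sample: system persona + reverse-prompted instruction +
# the actual human-written piece as the target response.
sample = {
    "system": "You are a professional author with a raw, visceral writing style.",
    "instruction": (
        "Write a short story about a lighthouse keeper that touches on "
        "themes of isolation and memory, in the style of literary fiction."
    ),
    # In the real dataset this is the full piece, up to ~4096 tokens.
    "output": "The lamp turned, and the sea turned with it...",
}

# Datasets are usually stored as JSONL: one JSON object per line.
line = json.dumps(sample)
print(line)
```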

I have three datasets: a small one that is purely top-quality writing, structured as above; a middle-sized one that also works in some fiction-focused synthetic GPT-4 data I've generated myself and curated from other datasets; and a larger one that additionally incorporates conversational responses derived from an entirely Claude-generated dataset.

I've then run a full fine-tune on Mistral with those datasets using Axolotl on RunPod, using either 2 or 3 A100s.
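(For anyone wanting to reproduce something similar: an Axolotl run is driven by a YAML config along these lines. This is only an illustrative sketch, not my actual settings; the path and hyperparameter values are placeholders, and you should check field names against Axolotl's example configs.)

```yaml
base_model: mistralai/Mistral-7B-v0.1

datasets:
  - path: data/writing_dataset.jsonl   # placeholder path
    type: chatml                       # or an orca-style type with a system field

sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 2
learning_rate: 0.00002
lr_scheduler: cosine
optimizer: adamw_torch
bf16: true
```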

I find utilising a system prompt very beneficial; it seems to help build associative connections.

Overall results have been pretty good. The larger dataset model is a great all-round writer and still generalises well. The smaller dataset model produces writing that is literary, verbose, and pretty.

I've also had some success training on Zephyr as a base model; it helps give underlying structure and coherence. Finding the right balance between pretty, long-form writing and enough underlying reasoning to sustain coherence has been the key challenge for me.

[–] AstronomerChance5093@alien.top 1 points 9 months ago

Thank you for such a detailed response - really helpful!