this post was submitted on 30 Oct 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

Can't say I've seen any post discussing game dev with LLM + grammar, so here I go.

A grammar makes the LLM's response parsable by code: you can easily construct a JSON template that every response from the LLM must conform to.

On a per-character basis during RP, the following grammar properties are useful (a sketch of the resulting JSON follows the list):

  • emotion (I had a list of available emotions in my grammar file)
  • affectionChange (describes the character's immediate reaction to the user's words and actions. Useful for visual novel immersion and progression manipulation)
  • location (the LLM is good at tracking whereabouts and movement)
  • isDoingXXX (the LLM is capable of detecting the start and end of activities)
  • typeOfXXX (the LLM also knows what the activity is; for example, if the character is cooking, a property called "whatIsCooking" will show eggs and ham)
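
For illustration, a response constrained by such a grammar might look like this (the field names mirror the list above; the values are made up):

{"emotion": "happy", "affectionChange": 0.2, "location": "kitchen", "isCooking": true, "whatIsCooking": "eggs and ham"}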

I'm building a Ren'Py game using the above, and the prototype is successful: the LLM can indeed meaningfully interact with the game world and acts more as a director than a text-gen engine. I would assume a D&D game could be built the same way, with a grammar file containing properties such as "EnemyList", "Stats", etc.
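
To make the integration concrete, here is a minimal sketch of the game-side plumbing. It is not my actual Ren'Py code; generate_with_grammar is a placeholder for whatever backend call you use, and the state fields are just examples:

import json

# Placeholder for the backend call (llama.cpp, oobabooga API, etc.) that
# generates text constrained by a GBNF grammar file.
def generate_with_grammar(prompt: str, grammar_path: str) -> str:
    raise NotImplementedError  # backend-specific

def handle_turn(game_state: dict, player_input: str) -> dict:
    raw = generate_with_grammar(player_input, "roleplay_character.gbnf")
    data = json.loads(raw)  # the grammar guarantees this is valid JSON

    # The LLM acts as a director: structured fields drive the game state.
    game_state["affection"] += data["affectionChange"]
    game_state["location"] = data["location"]
    game_state["sprite"] = data["emotion"]    # e.g. show the "surprised" sprite
    game_state["dialogue"] = data["reply"]    # text for the dialogue box
    return game_state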

For my usage, 7B Mistral or 13B models already give sensible/accurate results. Oobabooga allows grammar usage with ExLlama_HF (and AWQ, but I haven't tried it).

How is everyone using grammar to integrate LLM as a functional component of your software? Any tips and tricks?

top 2 comments

We used a similar system in our project here; we had the LLM also identify which location on the map the actions should be performed at. https://youtu.be/e2UGwXpu_zc?si=hrV0Vnq7C-9BZ42i

[–] zware@alien.top 1 points 1 year ago

How do you get it to work with ExLlama or ExLlamav2?

It works beautifully with llama.cpp, but with GPTQ models the responses are always empty.

zephyr-7B-beta-GPTQ:gptq-4bit-32g-actorder_True:

{"emotion": "surprised", "affectionChange": 0, "location": "^", "feeling": "^", "action": [],"reply": "^"}

zephyr-7b-beta.Q4_K_M.gguf:

{"emotion":"surprised","affectionChange":0.5,"location":"office","feeling":"anxious","action":["looking around the environment"],"reply":"Hello! I'm Lilla, nice to meet you!"}

This is my grammar definition:

root ::= RoleplayCharacter
RoleplayCharacter ::= "{"   ws   "\"emotion\":"   ws   Emotion   ","   ws   "\"affectionChange\":"   ws   number   ","   ws   "\"location\":"   ws   string   ","   ws   "\"feeling\":"   ws   string   ","   ws   "\"action\":"   ws   stringlist   ","   ws   "\"reply\":"   ws   string   "}"
RoleplayCharacterlist ::= "[]" | "["   ws   RoleplayCharacter   (","   ws   RoleplayCharacter)*   "]"
Emotion ::= "\"happy\"" | "\"sad\"" | "\"angry\"" | "\"surprised\""
string ::= "\""   ([^"]*)   "\""
boolean ::= "true" | "false"
ws ::= [ \t\n]*
number ::= [0-9]+   "."?   [0-9]*
stringlist ::= "["   ws   "]" | "["   ws   string   (","   ws   string)*   ws   "]"
numberlist ::= "["   ws   "]" | "["   ws   number   (","   ws   number)*   ws   "]"
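
For reference, the same grammar can also be exercised outside of oobabooga with llama-cpp-python, roughly like this (model path and prompt are just placeholders):

from llama_cpp import Llama, LlamaGrammar

# Read the GBNF definition above from a file and build a grammar object.
with open("roleplay_character.gbnf") as f:
    grammar = LlamaGrammar.from_string(f.read())

llm = Llama(model_path="zephyr-7b-beta.Q4_K_M.gguf")
out = llm(
    "Greet the player as Lilla.",
    grammar=grammar,
    max_tokens=256,
)
print(out["choices"][0]["text"])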

Do you need to "prime" the models using prompts to generate the proper output?