LocalLLaMA · 27 Nov 2023
Hi folks, I've edited the llama.cpp server frontend to make it look nicer, and I've added a few features. Among them is something I have been missing there for a long time: templates for prompt formats.

Here is the GitHub link: ++camalL

Otherwise here is a small summary:

- UI with CSS to make it look nicer and cleaner overall.

- CSS moved out into a separate file

- Added a dropdown menu with prompt style templates

- Added a dropdown menu with system prompts

- Prompt styles and system prompts are separate files, so editing them is very easy.

- Created a script that uses "dialog" to compose the command for the server.

- The script can save and load configs (a rough sketch of the idea follows below this list)
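
Conceptually, the dialog script just collects a handful of values and assembles the server command from them. Here is a minimal Python sketch of that idea; the config file name and its keys are hypothetical, and the real script is a shell script built around "dialog":

```python
import json
import shlex
import subprocess

# Load a saved config and turn it into a llama.cpp server command line.
# "server-config.json" and its keys are made-up names for illustration.
with open("server-config.json") as f:
    cfg = json.load(f)

cmd = [
    "./server",
    "-m", cfg["model"],                    # path to the GGUF model file
    "-c", str(cfg.get("ctx", 2048)),       # context size
    "--port", str(cfg.get("port", 8080)),  # port the server listens on
]
print("Running:", shlex.join(cmd))
subprocess.run(cmd)
```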

In planning or already started:

- Multilingual support (WIP): you will be able to select the language from a dropdown menu. So far there are language files only for English and German (this covers UI elements and system prompts).

- Dark Mode

- Templates for the values of the UI options (samplers etc.), e.g. a deterministic template, a creative template, a balanced template, and so on (see the sketch after this list)

- Zenity start script (like dialog, but with a GUI)
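
To give a rough idea of the planned sampler templates, they could look something like this. The parameter names match the llama.cpp server API; the numbers are illustrative guesses, not final values:

```python
# Hypothetical sampler presets for the planned UI value templates.
# Parameter names follow the llama.cpp server API; values are guesses.
SAMPLER_PRESETS = {
    "deterministic": {"temperature": 0.0, "top_k": 1,   "top_p": 1.0,  "repeat_penalty": 1.1},
    "balanced":      {"temperature": 0.7, "top_k": 40,  "top_p": 0.9,  "repeat_penalty": 1.1},
    "creative":      {"temperature": 1.2, "top_k": 100, "top_p": 0.95, "repeat_penalty": 1.05},
}
```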

---

As for the prompt format templates, I just picked a few by feel. The most important are the four that almost all the others can be traced back to: Alpaca, ChatML, Llama2, and Vicuna.
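
For illustration, here is roughly what those four base formats look like as plain Python format strings. They follow the commonly documented conventions for each family; individual fine-tunes can deviate slightly:

```python
# The four base prompt formats as format strings. {system} and {prompt}
# are placeholders for the system prompt and the user message.
PROMPT_FORMATS = {
    "alpaca": "### Instruction:\n{prompt}\n\n### Response:\n",
    "chatml": (
        "<|im_start|>system\n{system}<|im_end|>\n"
        "<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    "llama2": "[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{prompt} [/INST]",
    "vicuna": "SYSTEM: {system}\nUSER: {prompt}\nASSISTANT:",
}

print(PROMPT_FORMATS["chatml"].format(system="You are helpful.", prompt="Hello!"))
```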

But if you want more templates for a specific model, feel free to let me know here or on github.

As you can see in the third picture, it should now be easier for beginners to use the llama.cpp server, since a TUI dialog assists them.

Hope you like my work. Feel free to give feedback.

PS: I've made a pull request, but for now I'm publishing it on my own forked repo.

[Screenshots: ui-1 and ui-2 (the new web UI), tui-1 (the dialog-based TUI)]

involviert@alien.top · 11 months ago

Regarding that "prediction" setting, what exactly is it? I remember n_predict from using llama.cpp directly, but I think I always set it to -1 for the maximum. And I don't think I even have such a setting in llama-cpp-python?

Evening_Ad6637@alien.top · 11 months ago

Yes, it means "predict n tokens". Is it not easy to understand? I might change it back... For me it is important that a UI is not overloaded with words, and unfortunately "Predict_n Tokens"... how can I say... it looks awful. So I am looking for something that is more aesthetic but still easy to understand. That's difficult to find.
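
For reference: in the server's HTTP API this setting maps to the n_predict field, and -1 removes the cap (generate until the model stops or the context is full). In llama-cpp-python the rough equivalent should be the max_tokens argument. A minimal sketch, assuming the default localhost:8080 address:

```python
import requests

# n_predict caps the number of generated tokens; -1 means "no cap".
resp = requests.post(
    "http://localhost:8080/completion",  # default llama.cpp server address
    json={"prompt": "Hello, my name is", "n_predict": -1},
)
print(resp.json()["content"])
```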

involviert@alien.top · 11 months ago

Oh, it wasn't about your choice of words; that seems fine.

uhuge@alien.top · 11 months ago

Not sure if usable, but "rounds" or "amount" seem like good alternatives.