this post was submitted on 19 Nov 2023
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Thanks for the comments so far!
My intention was to learn about LLMs and to put that into practice. I am aware of existing projects and already have something working; it just feels a bit bloated and impractical.
I wanted to learn about writing an application that uses LLMs. My goal was to build something that could load a set of PDFs and answer questions about them, all locally on my PC.
I used PySide6 (Qt for Python) for the GUI, since I was already pretty familiar with Qt and C++ and I'm not a fan of running apps in a browser.
I used a combination of Hugging Face APIs and LangChain to write the application itself, and ended up with a somewhat generalized application into which I could load different models.
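The core pattern behind a "load PDFs, ask questions" app is retrieval: split documents into chunks, then pull the chunks most relevant to a query into the model's prompt. Here's a dependency-free sketch of that step; in the real app, LangChain loaders/splitters and a Hugging Face embedding model would replace the naive character chunking and keyword scoring shown here, so treat this as an illustration of the shape, not the actual implementation.

```python
# Minimal sketch of the retrieval step in a PDF question-answering app.
# Real apps would use a PDF loader, an embedding model, and a vector
# store instead of this naive chunking and keyword overlap.

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Rank chunks by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(chunks, key=lambda c: -len(terms & set(c.lower().split())))[:k]

doc = ("Llama is a family of large language models released by Meta AI. "
       "The models can be run locally for question answering over documents.")
top = retrieve(chunk(doc, size=80, overlap=20), "which models run locally")
```

The retrieved `top` chunks would then be pasted into the LLM prompt ahead of the question, which is essentially what a LangChain retrieval chain automates.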
It works, maybe not perfectly, but I accomplished what I set out to do: learn about implementing an application.
I did the same thing with Stable Diffusion models, but using just the Hugging Face APIs, no LangChain.
If you want to learn how to run an LLM locally in inference mode, you should code that at the command line rather than spend time building a GUI app. Build a GUI app if you want to learn how to build GUI apps, or once you've already proven out the engine at the command line and now want a pretty GUI around it.
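The "prove it at the command line first" approach can be as small as a read-generate-print loop. Below is a sketch of that skeleton; `generate()` is a stub standing in for the actual local model call (e.g. via transformers or llama.cpp bindings), which is an assumption and not shown.

```python
# Command-line skeleton for proving out local inference before
# building a GUI. generate() is a placeholder for a real local
# model call.

def generate(prompt: str) -> str:
    """Stub standing in for a local LLM inference call."""
    return f"(model reply to: {prompt!r})"

def repl(read=input, write=print) -> None:
    """Read prompts from the terminal until EOF or 'quit'/'exit'."""
    while True:
        try:
            prompt = read("> ").strip()
        except EOFError:
            break
        if prompt.lower() in {"quit", "exit"}:
            break
        if prompt:
            write(generate(prompt))

if __name__ == "__main__":
    repl()
```

Injecting `read` and `write` keeps the loop testable without a terminal; once this works with a real model behind `generate()`, the same function can be wired to a GUI text box instead.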