this post was submitted on 19 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Hi community,

I am writing my own GUI in which I want to use an LLM completely locally. The problem is I don't know how to start with the LLM.

Can someone explain to me how the first steps of integrating/working with an LLM work, or does someone know some good tutorials?

The LLM is already downloaded locally. Do I now need to integrate a library or something? Sorry, I could not find much useful/direct information about the topic.
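From what I have read so far, the usual entry point is a local inference library such as llama-cpp-python. My rough, untested understanding (assuming `pip install llama-cpp-python` and a downloaded GGUF model file; the paths, prompt template, and parameters below are just placeholders) is something like this:

```python
def build_prompt(system: str, user: str) -> str:
    # Simple instruction-style prompt; the exact template depends on the model.
    return f"{system}\n\nUser: {user}\nAssistant:"

def ask_local_llm(question: str, model_path: str) -> str:
    # Assumes llama-cpp-python is installed and model_path points to a GGUF file.
    from llama_cpp import Llama  # imported lazily so build_prompt works without it
    llm = Llama(model_path=model_path, n_ctx=2048)
    prompt = build_prompt("You are a helpful assistant.", question)
    out = llm(prompt, max_tokens=256, stop=["User:"])
    return out["choices"][0]["text"].strip()
```

Is that roughly the right shape, or am I missing a step?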

Thank you very much in advance!

[–] CultOfAmagi@alien.top 1 points 1 year ago (2 children)

Thanks for the comments so far!

My intention was to learn about LLMs and to put that into practice. I am aware of already existing projects and have something working already; it just feels a bit overloaded or impractical.

[–] catzilla_06790@alien.top 1 points 1 year ago

I wanted to learn about writing an application using LLMs. My goal was to write something into which I could load a set of PDFs and ask questions about them, all on my PC.
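Roughly, the flow is: extract the PDF text, split it into chunks, retrieve the chunks most relevant to the question, and prepend them to the prompt. A toy stdlib-only sketch of that retrieval idea (keyword overlap standing in for the embedding search LangChain actually does; all names here are illustrative):

```python
import re
from collections import Counter

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    # Split extracted PDF text into overlapping character chunks.
    chunks = []
    for start in range(0, max(len(text), 1), size - overlap):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
    return chunks

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Toy keyword-overlap scoring; a real app would use embeddings instead.
    q_words = Counter(re.findall(r"\w+", question.lower()))
    def score(chunk: str) -> int:
        c_words = Counter(re.findall(r"\w+", chunk.lower()))
        return sum((q_words & c_words).values())
    return sorted(chunks, key=score, reverse=True)[:k]

def build_qa_prompt(question: str, chunks: list[str]) -> str:
    context = "\n---\n".join(top_chunks(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

The real application swaps the keyword scoring for a vector store, but the prompt assembly at the end is the same idea.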

I used PySide6 (Qt for Python) for the GUI since I was pretty familiar with Qt and C++, and I'm not a fan of running apps in a browser.

I used a combination of HuggingFace APIs and Langchain to write the application itself, and ended up with a somewhat generalized application into which I could load different models.
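The "generalized" part boiled down to something like this (a simplified sketch, assuming the `transformers` `pipeline` API; `ModelSpec` and its fields are just illustrative names, not my actual code):

```python
from dataclasses import dataclass

@dataclass
class ModelSpec:
    # Minimal description of a loadable model; fields are illustrative.
    name: str
    path: str               # local directory or Hugging Face repo id
    max_new_tokens: int = 256

def load_generator(spec: ModelSpec):
    # Assumes `pip install transformers torch`; imported lazily so the
    # dataclass above is usable without the heavy dependencies.
    from transformers import pipeline
    return pipeline("text-generation", model=spec.path)

def generate(gen, spec: ModelSpec, prompt: str) -> str:
    return gen(prompt, max_new_tokens=spec.max_new_tokens)[0]["generated_text"]
```

Switching models then just means handing the GUI a different `ModelSpec`.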

It works, maybe not perfectly, but I did accomplish what I wanted, to learn about implementing an application.

I did the same thing with Stable Diffusion models, but with just HuggingFace APIs, no Langchain.
