this post was submitted on 26 Nov 2023

LocalLLaMA

Community to discuss Llama, the family of large language models created by Meta AI.

founded 10 months ago

Hi, I am a newbie C# dev. I am trying to create a home project, and until recently I was using LLamaSharp. There is little support for it, and since the recent updates I've been unable to get it to work at all.

I'm trying to build a little chat WPF application which can load either AWQ or GGUF LLM files. Are there any simple, easy-to-use libraries out there that I can use from C#?

I have a GTX 3060 and I'd prefer to use my GPU's VRAM if it's faster than DDR4 system RAM. I admit I'm under a few misconceptions. Ideally I'd like to be able to load the Mistral models in C#.


[–] Apprehensive_Cut1806@alien.top 1 points 9 months ago (1 children)
[–] laca_komputilulo@alien.top 1 points 9 months ago

ms semantic kernel

You could start with either of the following:

- https://learn.microsoft.com/en-us/dotnet/api/microsoft.semantickernel.connectors.ai.oobabooga.textcompletion?view=semantic-kernel-dotnet

- https://github.com/microsoft/semantic-kernel/pull/1357

Run ooba with the --api arg. Finish prototyping your code for the problem you wanted to solve, and then you could revisit the question of how to run inference natively within CLR.
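To make the suggestion concrete: once ooba is running with `--api`, the WPF app only needs an HTTP call. This is an untested sketch assuming the legacy `/api/v1/generate` route on port 5000 that text-generation-webui exposed in late 2023; newer builds ship an OpenAI-compatible API instead, so check the docs for the version you run.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class OobaClient
{
    static readonly HttpClient Http = new HttpClient();

    // Sends a prompt to a locally running text-generation-webui instance
    // and returns the generated text. Endpoint and JSON shape assume the
    // legacy /api/v1/generate route -- verify against your ooba version.
    public static async Task<string> GenerateAsync(string prompt)
    {
        var payload = JsonSerializer.Serialize(new { prompt, max_new_tokens = 200 });
        var response = await Http.PostAsync(
            "http://localhost:5000/api/v1/generate",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return doc.RootElement.GetProperty("results")[0]
                  .GetProperty("text").GetString() ?? "";
    }
}
```

Because it's plain HTTP, swapping ooba for llama.cpp's server (or any other backend) later only means changing the URL and payload shape, not the app.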

[–] TheTerrasque@alien.top 1 points 9 months ago

I don't know of an alternative, but I did some experimenting with it. I rewrote large parts of it and used a custom build of the llama.cpp DLLs. I'm pretty sure it'll still work with the newest llama.cpp build, though you might need to update some native calls if they've been expanded or renamed.

My changes are at https://github.com/TheTerrasque/LLamaSharp/tree/feature/clblast - I haven't really documented it much, but maybe the git history will help

[–] laca_komputilulo@alien.top 1 points 9 months ago

This answer is somewhat OT, but may be the best answer for your situation. Take it from someone who started coding C# in 2001.

The worst mistake a dev can make is to say "I'm a ___ dev." It's a label that needlessly limits your options.

Way back, I sank all my interest in the Semantic Web into porting Jena to NJena. I almost finished the conversion but never built anything useful with it.

For your problem: dockerize Ooba, llama.cpp, etc. exposing an API endpoint, then call that API via MS Semantic Kernel from your WPF app. Profit...

Better to spend your time learning containerisation than on coping with missing options in your chosen ecosystem.

[–] ThisGonBHard@alien.top 1 points 9 months ago

The whole AI ecosystem was pretty much designed around Python from the ground up.

I am guessing you can run C# as the front end and Python as the back end.

[–] _Lee_B_@alien.top 1 points 9 months ago

You DO NOT NEED TO LOAD AND RUN MODELS to use AI. Run a server like text-generation-webui, then use its API.

[–] mrjackspade@alien.top 1 points 9 months ago

Gonna be honest, you can totally just skip LlamaSharp and call the Llama.dll methods using interop in C#

It's really not difficult to do, and it cuts an entire layer of dependency out of your project.
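A minimal sketch of what that interop looks like. The function names and signatures below follow the llama.cpp C API as it stood around late 2023, but the API changes between releases, so always match them against the `llama.h` of the exact build whose DLL you load.

```csharp
using System;
using System.Runtime.InteropServices;

// P/Invoke bindings into llama.cpp's native library.
// "llama" resolves to llama.dll on Windows and libllama.so on Linux.
static class LlamaNative
{
    const string Lib = "llama";

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern void llama_backend_init(bool numa);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern void llama_backend_free();

    // Returns a pointer to a static native string; marshal it manually so
    // the runtime doesn't try to free memory it does not own.
    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    private static extern IntPtr llama_print_system_info();

    public static string SystemInfo() =>
        Marshal.PtrToStringAnsi(llama_print_system_info()) ?? "";
}

class Program
{
    static void Main()
    {
        LlamaNative.llama_backend_init(numa: false);
        Console.WriteLine(LlamaNative.SystemInfo());
        LlamaNative.llama_backend_free();
    }
}
```

Loading models and running inference this way also means marshalling structs by value across the C ABI, which is where most of the breakage between llama.cpp versions happens; that's the part LLamaSharp was wrapping for you.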