this post was submitted on 21 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


I tried running various networks locally for code completion, but it turns out most of the plugins just feed data directly to the neural network, which is why they all suck. So I built a new one that does the required preprocessing and uses Ollama instead of something weird.
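
Roughly, the idea looks like this (a minimal sketch of the general technique, not the plugin's actual code): trim the text around the cursor so it fits the context window, wrap it in fill-in-the-middle markers, and send it to Ollama's /api/generate endpoint. The model name and character limits below are assumptions; the <PRE>/<SUF>/<MID> tokens follow CodeLlama's infilling format.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint
MODEL = "codellama:7b-code"                         # assumed model choice

def build_fim_prompt(prefix: str, suffix: str, max_chars: int = 2000) -> str:
    """Trim the code around the cursor to a budget, then wrap it in
    CodeLlama's fill-in-the-middle markers."""
    prefix = prefix[-max_chars:]  # keep the text closest to the cursor
    suffix = suffix[:max_chars]
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

def complete(prefix: str, suffix: str) -> str:
    payload = {
        "model": MODEL,
        "prompt": build_fim_prompt(prefix, suffix),
        "stream": False,
        "options": {"temperature": 0.2, "num_predict": 64},
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Code models may emit an end-of-infill token; strip it if present.
    return body["response"].replace("<EOT>", "").rstrip()

if __name__ == "__main__":
    print(complete("def add(a, b):\n    return ", "\n\nprint(add(1, 2))"))
```

In an actual editor plugin the prefix/suffix would come from the buffer around the cursor, and you'd use streaming (`"stream": true`) so completions show up incrementally.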

top 7 comments
[–] geepytee@alien.top 1 points 10 months ago

I built a new one that does the required preprocessing

Nice! How are you preprocessing the code?

[–] SignalCompetitive582@alien.top 1 points 10 months ago (1 children)

Well, I don’t really know why, but on my M1 MacBook the extension freezes my entire system when it tries to do its thing, and even if I wait, nothing happens. If you have any way to fix it I’d take it, as I’d really like to see how much potential it has.

[–] duplissi@alien.top 1 points 10 months ago (1 children)

How much RAM do you have?

[–] SignalCompetitive582@alien.top 1 points 10 months ago

8 GB of RAM

[–] aka457@alien.top 1 points 10 months ago

The link to ollama is wonked.

[–] r3tardslayer@alien.top 1 points 10 months ago

How good is the Ollama model compared to something like WizardCoder 15B?

Also, would it let me force it to give me things the AI considers morally wrong?

[–] sammcj@alien.top 1 points 10 months ago

The README states "🚀 As good as Copilot" - that's a massive claim, which I highly doubt. Does it even have the context of your repo and open tabs?