this post was submitted on 27 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


My main use case for LLMs is literally auto-complete, mainly for coding, so I was wondering whether anyone has played with, or had any luck using, the base model for tasks that are close to plain auto-completion? I could imagine the instruction tuning adding a sycophancy bias in those areas.
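For code auto-complete, using a base model really is just string continuation: you feed the code written so far as a raw prefix, no chat template. Some code-oriented base models also support fill-in-the-middle (FIM) prompts; the sentinel spelling below follows the Code Llama infilling format, but that's an assumption to check against your model's card, and both helper functions are hypothetical illustrations, not a real API:

```python
# Sketch: base-model prompting for code completion is plain string
# continuation -- no chat template, no system prompt.
# The FIM sentinels below follow the Code Llama infilling format;
# other base models use different tokens or support no infilling at all
# (treat the exact <PRE>/<SUF>/<MID> spelling as an assumption).

def completion_prompt(prefix: str) -> str:
    """Left-to-right completion: the prompt IS the code so far."""
    return prefix

def infill_prompt(prefix: str, suffix: str) -> str:
    """Fill-in-the-middle: the model generates the span between
    the code before and after the cursor."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: complete the body of a function with known surrounding code.
prompt = infill_prompt(
    "def add(a, b):\n    return ",
    "\n\nprint(add(1, 2))",
)
print(prompt)
```

Whatever string this builds would then be tokenized and passed to the base checkpoint's generate call as-is; the instruct variant would instead expect the text wrapped in its chat template.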

[–] FPham@alien.top 1 points 11 months ago

Get a base model of your choice and finetune it on plain-text books in the style you want it to talk. Done.

[–] wojcech@alien.top 1 points 11 months ago

Fine-tune as in gradient updates, or as in ICL (in-context learning)?
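The distinction matters: "gradient updates" means actually changing the model's weights by minimizing next-token cross-entropy on your plain text, while ICL just puts style examples in the prompt. A minimal, self-contained sketch of the gradient-update version, using a toy character-level bigram model in NumPy as a stand-in for a real base LLM (everything here is a hypothetical miniature, not the actual Llama training setup):

```python
import numpy as np

# Toy illustration of "finetune = gradient updates": SGD on next-token
# cross-entropy over plain text. A real finetune updates the weights of
# a base LLM the same way, just at vastly larger scale.

text = "the model talks the way the training text talks"
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = np.array([stoi[ch] for ch in text])
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))  # bigram logits: row = current token

def loss_and_grad(W, ids):
    x, y = ids[:-1], ids[1:]
    logits = W[x]                                  # (T, V); fancy-index copy
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(len(y)), y]).mean()
    grad_logits = probs
    grad_logits[np.arange(len(y)), y] -= 1.0       # softmax-CE gradient
    grad_logits /= len(y)
    grad_W = np.zeros_like(W)
    np.add.at(grad_W, x, grad_logits)              # accumulate per-row grads
    return loss, grad_W

loss_before, _ = loss_and_grad(W, ids)
for _ in range(200):                               # plain SGD
    loss, grad = loss_and_grad(W, ids)
    W -= 1.0 * grad
loss_after, _ = loss_and_grad(W, ids)
print(f"loss: {loss_before:.3f} -> {loss_after:.3f}")
```

ICL, by contrast, would leave `W` untouched and instead prepend the style text to every prompt at inference time, so the two approaches trade storage (changed weights) against context budget.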