this post was submitted on 27 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


https://preview.redd.it/3krgd1sg2z2c1.png?width=800&format=png&auto=webp&s=b76c5fb9fa22938c74ec3095f63adaec8ff2219d

I came across this new fine-tuned model based on OpenChat 3.5, which was apparently trained using Reinforcement Learning from AI Feedback (RLAIF).

https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha

Check out this tweet: https://twitter.com/bindureddy/status/1729253715549602071
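For anyone who wants to try it, here's a minimal sketch of loading the model with the `transformers` library. The chat-template call assumes the repo's tokenizer ships an OpenChat-style template, which is an assumption on my part; check the model card for the exact prompt format.

```python
# Minimal sketch, assuming `transformers` and `torch` are installed and the
# repo's tokenizer ships a chat template (an assumption; verify against the
# model card's documented prompt format).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "berkeley-nest/Starling-LM-7B-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What base model were you trained from?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```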

[–] pseudonerv@alien.top 1 points 9 months ago (2 children)

From the Hugging Face model card:

Starling-RM-7B-alpha is a reward model trained from Llama2-7B-Chat.

From their webpage (https://starling.cs.berkeley.edu):

Our reward model is fine-tuned from Llama2-7B-Chat

Yet the model's config.json says:

"max_position_embeddings": 8192,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 10000.0,
"sliding_window": 4096,

SO? Whoever is doing the PR has no f***ing idea what their student laborers are actually doing.
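The mismatch is easy to reproduce. A quick sketch, assuming the `transformers` library is installed, that pulls the repo's config from the Hub and prints the fields quoted above:

```python
# Quick sketch to reproduce the check: fetch the reward model's config.json
# from the Hub and inspect the architecture fields.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("berkeley-nest/Starling-RM-7B-alpha")
print(config.model_type)           # "mistral", not "llama" as the card implies
print(config.sliding_window)       # 4096 -- a Mistral-specific setting
print(config.num_key_value_heads)  # 8 (Mistral's GQA); Llama-2-7B uses 32
```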

[–] Warm_Shelter1866@alien.top 1 points 9 months ago

What does it mean for an LLM to be a reward model? I always thought of rewards only in the RL field. And how would the reward model be used during fine-tuning?
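For what it's worth, a reward model in RLHF/RLAIF is typically the same transformer backbone with the language-modeling head swapped for a scalar head: it takes a (prompt, response) pair and outputs a single score. A rough conceptual sketch (class and field names are illustrative, not from Starling's code):

```python
# Conceptual sketch only (names are illustrative, not from Starling's code):
# a reward model reuses an LLM backbone but outputs one scalar per sequence.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone                     # pretrained decoder LLM
        self.value_head = nn.Linear(hidden_size, 1)  # scalar "reward" head

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        out = self.backbone(
            input_ids, attention_mask=attention_mask, output_hidden_states=True
        )
        last_hidden = out.hidden_states[-1]  # (batch, seq, hidden)
        # Score each (prompt, response) sequence from its final token.
        return self.value_head(last_hidden[:, -1, :]).squeeze(-1)  # (batch,)
```

During fine-tuning, the policy model samples responses, a network like this scores them, and an RL algorithm such as PPO updates the policy to raise the scores, usually with a KL penalty keeping it close to the original model.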
