this post was submitted on 16 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.

Here's how I managed to bootstrap generalized Tree-of-Thought capability in my AIs.

This was the secret sauce to SynthIA.

Generate your dataset with this, plus the Orca system prompts.

Open Source FTW. LFG!

https://preview.redd.it/45uyzynlen0c1.png?width=1744&format=png&auto=webp&s=694e69603c0656efbbea9a9e8b18d02a10c8633e
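
In rough strokes, the pipeline looks like this. A minimal sketch, not my exact code: the Tree-of-Thought system prompt below is a stand-in for the one in the screenshot, the seed questions are placeholders, and it assumes the OpenAI Python client (v1.x) with an API key set in the environment.

```python
# Sketch: bootstrap a Tree-of-Thought dataset by pairing a ToT-style
# system prompt with seed questions, then saving system/instruction/response
# triples as JSONL for fine-tuning.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder system prompt -- the real one is in the linked screenshot.
TOT_SYSTEM_PROMPT = (
    "Elaborate on the topic using a Tree of Thoughts, backtracking when "
    "necessary to construct a clear, cohesive answer."
)

# Placeholder seed questions; vary these to cover the domains you care about.
seed_questions = [
    "Why is the sky blue?",
    "How does gradient descent find a minimum?",
]

with open("tot_dataset.jsonl", "w") as f:
    for question in seed_questions:
        resp = client.chat.completions.create(
            model="gpt-4",  # or whichever teacher model you're using
            messages=[
                {"role": "system", "content": TOT_SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
        )
        record = {
            "system": TOT_SYSTEM_PROMPT,
            "instruction": question,
            "response": resp.choices[0].message.content,
        }
        f.write(json.dumps(record) + "\n")
```

Swap the Orca system prompts in alongside the ToT one to vary the reasoning styles in the dataset, then fine-tune on the resulting JSONL.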

top 5 comments
[–] herozorro@alien.top 1 points 10 months ago

Can someone explain what is meant by this?

Generate your dataset with this, plus the Orca system prompts.

[–] FutureIsMine@alien.top 1 points 10 months ago

What model was used with that prompt to bootstrap the data for a training set? Did you then take all that data and fine-tune the same model you used to bootstrap the initial dataset?

[–] Single_Ring4886@alien.top 1 points 10 months ago

Aaaah, you are the guy who proposed HelixNet; the names sometimes blur on Reddit. But reading your prompt, it is clear you are very smart and not afraid to explore new approaches.

I know my praise seems generic, and there are a lot of great people around, but I really think you are among the top people.

Did you use GPT-4 to generate the dataset, or other models?

[–] Distinct-Target7503@alien.top 1 points 10 months ago

Do you have an alternative version for chain-of-thought?

[–] YourTechBud@alien.top 1 points 10 months ago

This seems really exciting. I'm kinda new to this, so sorry for asking such a noob question.

Is this a system message for the SynthIA model, or is this a prompt for GPT-4 to generate the dataset? If the latter, how do you generate a "generalized" dataset? Is it by passing different user prompts? If so, how do you decide which user prompts to provide?

And is it right that you then use that dataset to fine-tune the model? So you could build a dataset in a particular domain to improve the resulting model's reasoning capability in that domain?