this post was submitted on 13 Nov 2023

LocalLLaMA


Community to discuss Llama, the family of large language models created by Meta AI.


Hey LocalLLaMA. We're Higgsfield AI, and we train huge foundation models.

We have a massive GPU cluster and have developed our own infrastructure to manage it and train massive models. We've been lurking in this subreddit for a long time and have learned a lot from this passionate community. Right now we have spare GPUs, and we're excited to give back to this incredible community.

We built a simple web app where you can upload your dataset and fine-tune a model on it: https://higgsfield.ai/

Here's how it works:

  1. You upload your dataset, in the preconfigured format, to Hugging Face [1].
  2. Choose your LLM (e.g. LLaMA 70B, Mistral 7B).
  3. Place your submission in the queue.
  4. Wait for it to be trained.
  5. Pick up your trained model on Hugging Face.

[1]: https://github.com/higgsfield-ai/higgsfield/tree/main/tutorials
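As a minimal sketch of step 1 (the `instruction`/`response` schema below is a hypothetical stand-in; the tutorials repo linked as [1] documents the actual expected format), preparing a dataset usually amounts to writing your examples as JSON lines and uploading the file to a Hugging Face dataset repo:

```python
import json

# Hypothetical schema -- check the tutorials repo [1] for the real one.
rows = [
    {"instruction": "What is the capital of France?", "response": "Paris."},
    {"instruction": "Translate 'hello' to French.", "response": "Bonjour."},
]

# Write one JSON object per line (JSON Lines format).
with open("train.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# train.jsonl is then what you upload to a Hugging Face dataset repo,
# e.g. via the web UI or the `huggingface-cli` tool.
```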

[–] dahara111@alien.top 1 points 10 months ago

I registered. I'm very interested and grateful to be able to use it, but I haven't uploaded my dataset to Hugging Face yet, so I can't use it yet.

I also don't understand this new training workflow where you just register a model and a dataset.

What is running behind the scenes?
A very simple snippet or some code would help me understand.

For example: if I give you a model and a dataset, roughly what code will run, and under what conditions will training be finished?
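Higgsfield hasn't published its job runner here, but a managed fine-tuning service is typically a plain supervised training loop over the uploaded dataset. Below is a generic sketch under that assumption (the toy `toy_step` function stands in for a real LLM forward/backward pass and optimizer step); the usual stopping condition is simply a configured number of epochs:

```python
# Generic sketch of a managed fine-tuning job: not Higgsfield's actual
# code, just the common shape of such services. Training "finishes"
# after a fixed number of passes (epochs) over the dataset.

def train(model_step, dataset, epochs=3):
    """Run model_step on every example; stop after `epochs` full passes."""
    avg_losses = []
    for epoch in range(epochs):
        epoch_loss = sum(model_step(example) for example in dataset)
        avg_losses.append(epoch_loss / len(dataset))
    return avg_losses  # after the last epoch, the weights would be uploaded

# Toy stand-in for a model update: the loss shrinks on every step,
# mimicking convergence.
state = {"loss": 1.0}

def toy_step(example):
    state["loss"] *= 0.9
    return state["loss"]

avg_losses = train(toy_step, dataset=[{"x": 1}, {"x": 2}], epochs=3)
print(avg_losses)
```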