https://higgsfield.ai/chat

Hey LocalLLaMA, Higgsfield AI here

A few days ago, we built an easy-to-use platform for everyone in the community to finetune models. Many of you uploaded datasets, and they are waiting in the queue for training.

We received a lot of feedback, and many of you reached out asking for a way to try out the models.

We're happy to announce a chat interface that lets you do exactly that.

Let us know what you think.

Shout out to u/WolframRavenwolf and his efforts in comparing the LLMs.

His post inspired the list of models we currently support, and we will extend it soon.

  • HuggingFaceH4/zephyr-7b-beta
  • teknium/OpenHermes-2-Mistral-7B
  • jondurbin/airoboros-m-7b-3.1.2
  • ehartford/dolphin-2.1-mistral-7b
  • migtissera/SynthIA-7B-v1.3
  • mistralai/Mistral-7B-Instruct-v0.1
  • migtissera/SynthIA-7B-v2.0
  • teknium/CollectiveCognition-v1.1-Mistral-7B
  • ehartford/dolphin-2.2-yi-34b
  • NurtureAI/openchat_3.5-16k

Stay fine-tuned for future updates :)

[–] RiskApprehensive9770@alien.top replied:

You can train on any dataset as long as it follows our format.

Soon we'll publish a video tutorial.
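As a rough illustration of what "our format" could mean in practice, here is a minimal sketch of an instruction-tuning dataset in JSONL form (one JSON object per line). The field names `instruction` and `output` are assumptions for illustration only; the schema Higgsfield actually expects is documented in their tutorials repo.

```python
# Hypothetical instruction-tuning dataset in JSONL form. The field names
# ("instruction", "output") are assumptions for illustration; check the
# Higgsfield tutorials repo for the format their pipeline actually expects.
import json

records = [
    {"instruction": "Translate 'hello' to French.", "output": "bonjour"},
    {"instruction": "What is 2 + 2?", "output": "4"},
]

# One JSON object per line, no trailing commas, UTF-8 encoded.
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Round-trip check: every non-empty line must parse back as standalone JSON.
with open("dataset.jsonl", encoding="utf-8") as f:
    parsed = [json.loads(line) for line in f if line.strip()]
```

JSONL is convenient here because each example is independent, so large datasets can be streamed line by line instead of loaded as one giant JSON array.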

 

Hey LocalLLaMA. It's Higgsfield AI, and we train large foundation models.

We have a massive GPU cluster and have developed our own infrastructure to manage it and train massive models. We've been lurking in this subreddit for a while and have learned a lot from this passionate community. Right now, we have spare GPUs, and we are excited to give back to this incredible community.

We built a simple web app where you can upload your dataset to fine-tune a model: https://higgsfield.ai/

Here's how it works:

  1. Upload your dataset in the preconfigured format to HuggingFace [1].
  2. Choose your LLM (e.g. LLaMA 70B, Mistral 7B).
  3. Place your submission in the queue.
  4. Wait for it to be trained.
  5. Collect your trained model on HuggingFace.

[1]: https://github.com/higgsfield-ai/higgsfield/tree/main/tutorials
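Step 1 above can be sketched with the `huggingface_hub` library. This is a hedged example, not Higgsfield's own tooling: the repo id, filename, and JSONL layout are assumptions, and only the local validation helper runs without a network connection (the upload requires a token from `huggingface-cli login`).

```python
# Sketch of step 1: validate a local JSONL dataset, then push it to a
# Hugging Face dataset repo. Repo id and filename below are hypothetical.
import json
from pathlib import Path


def validate_jsonl(path: Path) -> int:
    """Check that every non-empty line is valid JSON; return the record count."""
    count = 0
    with path.open(encoding="utf-8") as f:
        for line in f:
            if line.strip():
                json.loads(line)  # raises json.JSONDecodeError on a bad line
                count += 1
    return count


def upload_dataset(path: Path, repo_id: str) -> None:
    """Push the validated file to a Hub dataset repo (network call)."""
    from huggingface_hub import HfApi  # pip install huggingface_hub

    api = HfApi()  # picks up the token saved by `huggingface-cli login`
    api.upload_file(
        path_or_fileobj=str(path),
        path_in_repo=path.name,
        repo_id=repo_id,          # e.g. "your-username/my-finetune-dataset"
        repo_type="dataset",
    )
```

After the dataset repo exists on the Hub, steps 2–5 (model choice, queueing, training, and retrieving the trained model) happen through the web app rather than through code.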