This post was submitted on 20 Nov 2023

LocalLLaMA

Community for discussing Llama, the family of large language models created by Meta AI.

I'm trying to perfect a dev tool that lets Python developers scale their code to thousands of cloud resources with a single line of code.
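
To give a rough idea of what that "single line" could look like, here is a sketch; the `cloud_parallel` decorator, its `workers` argument, and the local fallback are placeholders for illustration, not the tool's actual API:

```python
# Hypothetical sketch: a decorator that fans a function out to cloud workers.
# `cloud_parallel` and everything inside it are invented for illustration;
# a real tool would provision cloud resources and dispatch calls remotely.
from typing import Callable, Iterable, List

def cloud_parallel(workers: int = 1000) -> Callable:
    """Placeholder decorator: this stand-in just maps the function locally."""
    def wrap(fn: Callable) -> Callable:
        def over(items: Iterable) -> List:
            return [fn(item) for item in items]  # local stand-in for remote fan-out
        return over
    return wrap

@cloud_parallel(workers=1000)  # the intended "one line"
def tokenize(text: str) -> int:
    return len(text.split())

print(tokenize(["scale this", "across many workers"]))  # -> [2, 3]
```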

I want to gather project ideas so I can build useful tutorials for running inference and fine-tuning open-source LLMs.

A few weeks back I created a tutorial teaching people to massively parallelize inference with Mistral-7B. It delivered a lot of value to a select few people and helped me better understand the flaws in my tool.
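
For a sense of the pattern that tutorial covered, here is a rough, framework-agnostic stand-in (not the tutorial's actual code): shard prompts across GPUs with plain multiprocessing. The checkpoint name, GPU count, and prompts are assumptions.

```python
# Sketch: shard prompts across GPU workers for parallel Mistral-7B inference.
# Checkpoint name, GPU count, and prompts are illustrative assumptions.
import os
from concurrent.futures import ProcessPoolExecutor

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed checkpoint

def run_shard(args):
    gpu_id, prompts = args
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)  # pin this worker to one GPU
    from transformers import pipeline  # import inside the worker, after pinning
    pipe = pipeline("text-generation", model=MODEL_ID, device=0)
    return [pipe(p, max_new_tokens=128)[0]["generated_text"] for p in prompts]

if __name__ == "__main__":
    prompts = [f"Summarize item {i}" for i in range(64)]
    n_gpus = 4  # assumed
    shards = [(g, prompts[g::n_gpus]) for g in range(n_gpus)]
    with ProcessPoolExecutor(max_workers=n_gpus) as ex:
        results = [r for shard in ex.map(run_shard, shards) for r in shard]
    print(len(results), "completions")
```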

Anyway, I want to open this up to the community before deciding which tutorials to prioritize. Please drop any project/tutorial ideas, and if you think someone's idea is good, please upvote it (so I know you'd find it valuable).

[–] CocksuckerDynamo@alien.top 1 points 10 months ago

What is different or better about what you're proposing compared to existing prominent solutions such as vLLM, TensorRT-LLM, etc.?

It's not clear to me what exactly the value proposition of your offering is.