this post was submitted on 18 Nov 2023

LocalLLaMA


Community for discussing Llama, the family of large language models created by Meta AI.


So let's say we ask an LLM to predict what would happen if we put a pen on the table, and it simulates a thousand possibilities. Is there an LLM that would run perpendicular to these outputs, acting as a sort of summarizer/filter? Is there a project working on anything like this?

Been looking, not finding. Thanks!

top 4 comments
[–] Exotic-Estimate8355@alien.top 1 points 1 year ago (1 children)

High temperature + batch inference on the same prompt
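
To illustrate the suggestion, here is a minimal, self-contained sketch of what temperature does and what "batch inference on the same prompt" means. The logits and token strings are made-up toy values, not output from any real model; in practice you would get the diversity by sampling N completions from the same prompt at a high temperature setting.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.
    Higher temperature flattens the distribution, giving more diverse samples."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for a toy "pen on the table" continuation
logits = [4.0, 2.0, 1.0, 0.5]
tokens = ["stays", "rolls", "falls", "bounces"]

low_t = softmax_with_temperature(logits, 0.2)   # sharp: mass piles on the top token
high_t = softmax_with_temperature(logits, 1.5)  # flat: other outcomes get sampled too

def sample_batch(probs, tokens, n, rng):
    """'Batch inference': n independent draws from the same distribution,
    standing in for n completions of the same prompt."""
    return [rng.choices(tokens, weights=probs)[0] for _ in range(n)]

rng = random.Random(0)
print(sample_batch(high_t, tokens, 8, rng))
```

At low temperature nearly every sample repeats the most likely outcome; at high temperature the batch spreads across the alternatives, which is what makes the later summarizing/filtering pass worthwhile.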

[–] Away-Bird-6339@alien.top 1 points 1 year ago

Yeah, I agree this is decent. There could also be another layer of prompting, with some logic to steer it along particular paths, in combination with the temperature adjustment. But that's the parallel processing; what about the perpendicular part? Just another layer of LLM taking in all the answers and choosing the best? I'm hoping for something a bit more integrated than that. Any ideas?
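
The "another layer of LLM" idea can at least be sketched as prompt construction: gather the parallel samples and feed them all into one second-pass call that summarizes or picks a winner. The function name, question, and candidate answers below are illustrative assumptions; the actual second-pass call to a model is left out.

```python
def build_judge_prompt(question, candidates):
    """Assemble a second-pass ('perpendicular') prompt that asks a single
    LLM call to read every parallel sample and merge/pick the best answer."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        f"Question: {question}\n\n"
        f"Here are {len(candidates)} candidate answers:\n{numbered}\n\n"
        "Summarize the consensus and pick the most plausible answer."
    )

prompt = build_judge_prompt(
    "What happens if we put a pen on the table?",
    ["It stays put.", "It rolls off.", "It stays where placed."],
)
print(prompt)
```

This is essentially what "LLM-as-judge" or self-consistency aggregation setups do, just inlined rather than integrated into a framework.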

[–] Robot_Graffiti@alien.top 1 points 1 year ago

"Simulate" is probably too strong a word for what happens when you ask an LLM what happens in a physical scenario. It's more like "make up a story." It doesn't picture the scene in its mind or do any kind of physics, it only thinks about the words.

But, in principle, yeah, you could make it tell you a thousand stories, and have it classify each story into predefined categories, and then use some other program to count up the totals (LLMs suck at counting).
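
The counting step really is trivial in ordinary code. A sketch, with made-up category labels standing in for whatever the classification pass would emit over the thousand stories:

```python
from collections import Counter

# Hypothetical labels from a classifier pass over 1,000 generated stories;
# the tallying is done in plain Python, since LLMs are unreliable at counting.
labels = ["stays"] * 912 + ["rolls off"] * 80 + ["other"] * 8

totals = Counter(labels)
most_common = totals.most_common(1)[0]
print(totals)
print(most_common)  # the modal outcome and its count
```

The modal category is then your "summary" of the thousand simulations, without trusting the model to do arithmetic.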

[–] kulchacop@alien.top 1 points 1 year ago

The closest I can think of are Promptbreeder and Graph of Thoughts. These techniques focus on improving the final response by generating many intermediate responses.