this post was submitted on 01 Nov 2023

Machine Learning

I want to use LLMs to automate analyzing data and provide insights to my users, but I often notice insights being generated from factually incorrect data. I've tried fine-tuning my prompts, changing the structure in which I pass data to the LLM, and few-shot learning, but there's still some chance it will hallucinate. How can I create a production-ready application where these insights are surfaced to end users and presenting incorrect insights is not acceptable? I'm out of ideas. Any guidance is appreciated πŸ™πŸ»

top 7 comments
[–] vanlifecoder@alien.top 1 points 2 years ago

Task-specific models chained together: nux.ai

[–] UndocumentedMartian@alien.top 1 points 2 years ago (1 children)

By not using LLMs to do the modelling. Use specialized models for data analysis and use an LLM to orchestrate those models and communicate with the user. LLMs are not cheap to run, though, so you may want to do a cost/benefit analysis.
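
For a rough sketch of what that split might look like in Python (pandas does the numbers, the LLM only phrases them; `compute_insights` and the `call_llm` stub are placeholder names, not any particular library):

```python
from typing import Callable

import pandas as pd


def compute_insights(df: pd.DataFrame) -> dict:
    """Derive every number with ordinary code, never with the model."""
    return {
        "total_revenue": float(df["revenue"].sum()),
        "best_month": df.loc[df["revenue"].idxmax(), "month"],
        "last_mom_change_pct": round(float(df["revenue"].pct_change().iloc[-1]) * 100, 1),
    }


def narrate(facts: dict, call_llm: Callable[[str], str]) -> str:
    """The LLM only verbalizes pre-computed facts and is told to add nothing."""
    prompt = (
        "Summarize these figures for a business user. Use ONLY the numbers "
        f"listed and do not invent any others:\n{facts}"
    )
    return call_llm(prompt)


df = pd.DataFrame({"month": ["Aug", "Sep", "Oct"], "revenue": [120.0, 150.0, 135.0]})
facts = compute_insights(df)

# Swap in your real client here; the lambda just keeps the sketch self-contained.
print(narrate(facts, call_llm=lambda prompt: f"(LLM output goes here) facts={facts}"))
```

Even if the model phrases things badly, it can't produce a number that your own code didn't compute.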

[–] software-n-erd@alien.top 1 points 2 years ago

Gotcha, I honestly wasn't aware of any data analysis models. Are there any you've used that you think I should look at?

[–] Seankala@alien.top 1 points 2 years ago (1 children)

The fact that this is actually getting upvoted is really a sign of what's happened to this community.

[–] software-n-erd@alien.top 1 points 2 years ago

I guess people just want to learn. If you think this isn't the right approach, just say so :)

[–] EvM@alien.top 1 points 2 years ago (1 children)

The short answer is: you can't. If you want a reliable system that never hallucinates, use rules/templates. It's also easier to maintain. Ehud Reiter has written extensively about this.
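
To make that concrete, a minimal sketch of the rules/templates approach (the threshold and wording here are illustrative): every sentence is a fixed template filled with values your own code computed, so there is nothing to hallucinate.

```python
def revenue_insight(current: float, previous: float) -> str:
    """One fixed template per case; the only variable parts are computed numbers."""
    change_pct = (current - previous) / previous * 100
    if abs(change_pct) < 1.0:
        return f"Revenue was roughly flat at {current:,.0f}."
    direction = "up" if change_pct > 0 else "down"
    return f"Revenue was {direction} {abs(change_pct):.1f}% to {current:,.0f}."


print(revenue_insight(135_000, 150_000))  # Revenue was down 10.0% to 135,000.
```

The output is less fluent than an LLM's, but it's fully auditable, and that trade-off is exactly what Reiter's writing on data-to-text NLG covers.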

[–] software-n-erd@alien.top 1 points 2 years ago