this post was submitted on 01 Nov 2023

Machine Learning

I want to use LLMs to automate analyzing data and provide insights to my users, but I often notice insights being generated from factually incorrect data. I have tried fine-tuning my prompts, changing the structure in which I pass data to the LLM, and few-shot learning, but there is still some chance of hallucination. How can I build a production-ready application where these insights are surfaced to end users and presenting incorrect insights is not acceptable? I am out of ideas. Any guidance is appreciated 🙏🏻
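For context, this is roughly the shape of what I have been trying: pre-compute the aggregates myself, ask the model to cite the numbers each insight uses, and drop any insight whose cited numbers don't match the data. It's only a rough sketch — `call_llm` and the metric names are placeholders, not my real code:

```python
# Rough sketch (placeholders only): pass the LLM pre-computed aggregates
# instead of raw rows, ask it to cite the numbers each insight uses, and
# surface only insights whose cited numbers match the source data.
import json

import pandas as pd


def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client is already in use."""
    raise NotImplementedError


def generate_verified_insights(df: pd.DataFrame) -> list[dict]:
    # Summarize the data in code so the model never has to do arithmetic.
    numeric = df.select_dtypes("number")
    summary: dict = {"row_count": len(df)}
    summary.update({f"total_{c}": round(float(numeric[c].sum()), 2) for c in numeric.columns})
    summary.update({f"mean_{c}": round(float(numeric[c].mean()), 2) for c in numeric.columns})

    prompt = (
        "You are a data analyst. Using ONLY the aggregates below, return a JSON "
        "list of insights. Each insight must be an object with 'text' and "
        "'numbers_used' (metric name -> value copied verbatim from the data).\n"
        f"Data: {json.dumps(summary)}"
    )
    insights = json.loads(call_llm(prompt))

    # Reject any insight that cites a number not present in the aggregates.
    known = {k: v for k, v in summary.items() if isinstance(v, (int, float))}
    verified = []
    for insight in insights:
        cited = insight.get("numbers_used", {})
        if cited and all(
            isinstance(v, (int, float)) and abs(known.get(k, float("nan")) - v) < 1e-6
            for k, v in cited.items()
        ):
            verified.append(insight)
    return verified
```

Even with this kind of check, insights that cite correct numbers can still frame them misleadingly, which is why I'm asking what people do in production.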

[–] Seankala@alien.top 1 points 1 year ago (1 children)

The fact that this is actually getting upvoted is really a sign of what's happened to this community.

[–] software-n-erd@alien.top 1 points 1 year ago

I guess people just want to learn. If you think this isn't the right approach, just say so :)