this post was submitted on 30 Oct 2023

Machine Learning

I want to know which tools and methods you use for observability and monitoring of your ML (LLM) systems' performance and responses in production.


Hi there, Langfuse founder here. We're building open-source (MIT-licensed) observability & analytics for LLM applications. You can instrument your LLM via our SDKs (JS/TS & Python) or via integrations (e.g. LangChain) and collect all the data you want to observe. The product is model-agnostic and customizable.
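To make "instrumenting your LLM" concrete, here is a minimal, self-contained sketch of what such instrumentation does under the hood. This is not the Langfuse SDK; the `record_generation` helper, the field names, and the whitespace token estimate are all hypothetical stand-ins for what a real SDK would collect and ship to its backend:

```python
import time
import uuid

def record_generation(store, model, prompt, completion_fn):
    """Call an LLM and record a trace entry with latency and token counts.

    `store` is any list-like sink; a real observability SDK would send
    this record to a backend instead. (Hypothetical helper, not the
    Langfuse API.)
    """
    start = time.time()
    completion = completion_fn(prompt)            # the actual model call
    latency_ms = (time.time() - start) * 1000.0
    record = {
        "id": str(uuid.uuid4()),
        "model": model,
        "prompt": prompt,
        "completion": completion,
        "latency_ms": latency_ms,
        # naive whitespace token estimate, a stand-in for real usage data
        "tokens": len(prompt.split()) + len(completion.split()),
    }
    store.append(record)
    return completion

# usage with a stubbed model call
traces = []
reply = record_generation(traces, "my-model", "Hello world",
                          lambda p: "Hi there!")
```

The caller's code path is unchanged (it still just gets the completion back); the record is collected as a side effect, which is the usual shape of this kind of instrumentation.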

We've pre-built dashboards you can use to analyze, e.g., cost, latency and token usage in detailed breakdowns.
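As a rough illustration of what a cost/usage breakdown computes, here is a plain-Python aggregation over hypothetical trace records (this is not Langfuse's implementation; the record shape and the `price_per_1k_tokens` table are assumptions for the example):

```python
from collections import defaultdict

def usage_breakdown(records, price_per_1k_tokens):
    """Aggregate call count, token usage, and estimated cost per model."""
    summary = defaultdict(lambda: {"calls": 0, "tokens": 0, "cost": 0.0})
    for r in records:
        s = summary[r["model"]]
        s["calls"] += 1
        s["tokens"] += r["tokens"]
        # estimated cost from a per-model price table (assumed input)
        s["cost"] += r["tokens"] / 1000.0 * price_per_1k_tokens[r["model"]]
    return dict(summary)

records = [
    {"model": "model-a", "tokens": 1200},
    {"model": "model-a", "tokens": 800},
    {"model": "model-b", "tokens": 3000},
]
breakdown = usage_breakdown(records, {"model-a": 0.03, "model-b": 0.002})
```

A dashboard would render this per model, per user, or per time window; the grouping key is the only thing that changes.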

We're now starting to build (model-based) evaluations to get a grip on quality. You can also manually ingest scores via our SDKs, and export everything as CSV or via the GET API.
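Manually ingesting a score boils down to attaching a named value to an existing trace. A hypothetical sketch (the `add_score` helper and field names are illustrative, not the Langfuse SDK):

```python
def add_score(record, name, value, comment=None):
    """Attach a named evaluation score (e.g. from a human reviewer or a
    model-based evaluator) to an existing trace record."""
    record.setdefault("scores", []).append(
        {"name": name, "value": value, "comment": comment}
    )
    return record

# attach a human-review score to a previously recorded trace
trace = {"id": "trace-1", "completion": "Hi there!"}
add_score(trace, "helpfulness", 0.9, comment="reviewed by human")
```

Because scores are just named values on a trace, the same mechanism carries human labels, heuristic checks, and model-based evaluation results side by side.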

Would love to hear feedback from folks on this reddit on what we've built. Feel free to message me here or at contact at langfuse dot com.

We have an open demo so you can have a look around a project with sample data.