foenix

joined 1 year ago
[–] foenix@lemm.ee 20 points 4 months ago (2 children)

And you didn't even proofread the output...

[–] foenix@lemm.ee 26 points 4 months ago

I've used crewai and autogen in production... And I still agree with the person you're replying to.

The two main problems with agentic approaches I've discovered thus far:

  • One mistake or hallucination will propagate through the rest of the agentic task. I've even tried adding a QA agent to catch these, but those agents aren't reliable either, which leads to the main issue:

  • It's very expensive to run and rerun agents at scale. Because each agent can call other agents, the number of calls can grow exponentially. My colleague at one point ran a job that cost $15 for what could have been a simple task.
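To make the scaling point concrete, here's a back-of-the-envelope sketch of how delegation fans out. The numbers (fanout, depth, per-call price) are illustrative assumptions, not figures from the job mentioned above:

```python
# Illustrative cost model: each agent may delegate to `fanout` sub-agents,
# recursing down to `depth` levels, so total LLM calls = sum of fanout**level.
def total_calls(fanout: int, depth: int) -> int:
    return sum(fanout ** level for level in range(depth + 1))

def estimated_cost(fanout: int, depth: int, cost_per_call: float) -> float:
    # cost_per_call is a hypothetical average price per LLM request
    return total_calls(fanout, depth) * cost_per_call

# A modest crew: 3-way delegation, 4 levels deep
calls = total_calls(3, 4)  # 1 + 3 + 9 + 27 + 81 = 121 calls
print(calls, round(estimated_cost(3, 4, 0.12), 2))
```

Even with single-digit fanout and shallow depth, a "simple task" turns into triple-digit call counts, which is where the surprise bills come from.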

One last consideration: the current LLM providers are clearly aware of these issues, or they wouldn't be so concerned with finding "clean" data to scrape from the web rather than using agents to train agents.

If you're using crewai, btw, be aware that the library ships with built-in telemetry. I have a wrapper that removes it if you're interested in the code.
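The wrapper idea boils down to monkey-patching every public method of the library's telemetry class into a no-op before any events are sent. The `Telemetry` class below is a stand-in for demonstration only, not crewai's actual implementation (and since crewai's telemetry is OpenTelemetry-based, setting `OTEL_SDK_DISABLED=true` in the environment may also work):

```python
# Stand-in telemetry class; crewai's real one differs. This only
# demonstrates the no-op monkey-patching technique.
class Telemetry:
    def __init__(self):
        self.sent = []

    def track_event(self, name):
        self.sent.append(name)

    def flush(self):
        return len(self.sent)

def disable_telemetry(cls):
    """Replace every public method on cls with a no-op returning None."""
    def noop(*args, **kwargs):
        return None
    for attr in dir(cls):
        if not attr.startswith("_") and callable(getattr(cls, attr)):
            setattr(cls, attr, noop)

disable_telemetry(Telemetry)

t = Telemetry()
t.track_event("run_started")
print(t.sent)  # [] -- nothing was recorded
```

Patching the class (not an instance) means every object created afterwards is silenced, which is why a wrapper that runs at import time is enough.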

Personally, I'm kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.

[–] foenix@lemm.ee 8 points 1 year ago

1000% this. I used to run a weekly radio show and could only get through about 12 episodes before I started running out of steam. It takes a lot to produce an hour of polished audio: editing plus interviews plus research plus writing -- it all adds up. At around 100 episodes, that would be something like 20-30 novels' worth of content, depending on how you slice it.

[–] foenix@lemm.ee 25 points 1 year ago (5 children)

Episodes 99 and 100 specifically. NSO Group is weaponizing our loss of privacy.