AI chatbots were tasked to run a tech company. They built software in under seven minutes — for less than $1.
(www.businessinsider.com)
"I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well"
For how long will we be forced to endure different versions of the same article?
Like I said yesterday in a post celebrating how ChatGPT can answer medical questions with less than 80% accuracy: that is trash. A company with absolute shit code can still have virtually all of it "execute flawlessly." Whether or not code executes is not the bar by which we judge it.
Even if it were to hit 100%, which it does not, there's so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating their conflicting desires, getting to know the human beings who need a problem solved, and so on.
LLMs are not capable of this kind of meaningful collaboration, despite all this hype.
So what you're saying is that 86.66% of the time, it works every time.
Thank you for writing this so I only have to ~~upvore~~ upvote you.
Edit: What a difference one key can make.
I don't know what an upvore is and I don't want to know.