this post was submitted on 25 Apr 2024
Oh no, forming your ideas into a comprehensible essay format, with inter-sentence connectivity and flow, maybe even splitting them into paragraphs, isn't even close to LLM speech.
I do form long, connected, split-up texts and comments too, but there is a great difference between mine and an LLM's tone, cadence, mood, or whatever you wanna call these things.
For example, humans usually cut corners when forming sentences and paragraphs, even when forming long ones. We do this through lazy grammar, unrestricted thesaurus choices, uneven sentence and paragraph lengths, lots of abbreviations like "tbh", and lax punctuation like "(ChatGPT?)", which also stands in for a whole question sentence.
Also, the bland, upbeat, and respectful tone the bots mimic from carefully thought-out essays is never kept up in spontaneous writing or typing. That's a dead giveaway of scripted speech rather than genuine, on-point, assumption-laden human interaction.
We LLMs can't do these things yet with our rather simple reverse-Jenga assembly of syntax and semantics, sprinkled with a bit of simple formal pragmatics. The wild-west, very expansive, extended pragmatics of a language is where the real shit is at.
This is a phrase an AI (as they are now) would never use. To these LLMs, something is either a fact or something it thinks is a fact. They leave no room for interpretation. These AIs will never say, "I'm not sure, maybe. It's up to you," because that's not a fact. It's not a data point to be ingested.