Well, it's being used to WRITE the papers, might as well use it for grading too...
Tech
A community for high-quality news and discussion around technological advancements and changes
Things that fit:
- New tech releases
- Major tech changes
- Major milestones for tech
- Major tech news, such as data breaches or product discontinuations
Things that don't fit:
- Minor app updates
- Government legislation
- Company news
- Opinion pieces
This really seems like an Onion headline. ChatGPT grading ChatGPT essays.
AI didn't take our jobs; we gave them to it.
Soon kids will start talking like LLMs.
Always have, always will.
My pet hypothesis is that our brains are, in effect, LLMs that are trained via input from our senses and by the output of the other LLMs (brains) in our environment.
It explains why we so often get stuck in unproductive loops like flat Earth theories.
It explains why new theories are treated as "hallucinations" regardless of their veracity (cf. Copernicus, Galileo, Bruno). It explains why certain "prompts" cause mass "hallucination" (Wakefield and anti-vaxxers). It explains why the vast majority of people spend the vast majority of their time just coasting on "local inputs" to "common sense" (personal models of the world that, in their simplicity, often have substantial overlap with others).
It explains why we spend so much time on "prompt engineering" (propaganda, sound bites, just-so stories, PR "spin", etc) and so little on "model development" (education and training). (And why so much "education" is more like prompt engineering than model development.)
Finally, it explains why "scientific" methods of thinking are so rare, even among those who are actually good at it. To think scientifically requires not just the right training, but an actual change in the underlying model. One of the most egregious examples is Linus Pauling, winner of the Nobel Prize in chemistry and vitamin C wackadoodle.
You have it backwards. It isn't that we operate like LLMs; it's that LLMs are attempts to emulate us.
That is actually my point. I may not have made it clear in this thread, but my claim is not that our brains behave like LLMs, but that they are LLMs.
That is, our LLM research is not just emulating our mental processes, but showing us how they actually work.
Most people think there is something magic in our thinking, that mind is separate from brain, that thinking is, in effect, supernatural. I'm making the claim that LLMs are actual demonstrations that thinking is nothing more than the statistical rearrangement of that which has been ingested through our senses, our interactions with the world, and our experience of what has and has not worked.
Searle proposed a thought experiment called the "Chinese Room" in an attempt to discredit the idea that a machine could either think or understand. My contention is that our brains, being machines, are in fact just suitably sophisticated "Chinese Rooms".
Soon kids will start talking like LLMs.
When I read that I had some sort of epiphany - "wow - maybe our brains are just LLMs", and it felt weird. Probably not weird enough to change my model, but still weird.
Glad you wrote this comment - you said it so much better than I could have.
Edit - my model is going wild here. New thought - if our brains are LLMs, how do the brains in all the other species (without language) work? I guess an LLM is just a special case of a Large Sensory Input Model.
2nd edit - of course our brains are "just LLMs" - LLMs are special cases of computer simulations of neural networks modelled on brains. I know the logic is backwards and I'm a bit slow, but it still feels weird to read LLM-written articles and realise that we use a more evolved version of the same process to do basically everything.
AI =/= LLMs. AIs are neural networks modeled after the human brain in every capacity possible on current computers. Neural networks can be trained on text to create LLMs. They can be trained on photos to create image generators like Stable Diffusion. They can be trained on audio to speak exactly like someone or to generate music. They can be put into control loops that learn movements for robots like Boston Dynamics'. Neural networks are just small (for now) brains trained to do one thing.
We can already combine these to do pretty crazy things, and they're only going to get more powerful, more efficient, more integrated, and more capable. The AGI singularity will happen, and probably sooner than we think.
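To make the point above concrete, here's a minimal sketch (pure Python, no framework; a toy illustration, not anyone's actual system): the same generic neuron-and-weights mechanism learns whatever its training data encodes. A single artificial neuron (a perceptron) is trained here on the OR truth table.

```python
import random

def train_perceptron(samples, epochs=50, lr=0.1, seed=0):
    """Train a single artificial neuron with the classic perceptron rule."""
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # random initial weights
    b = rng.uniform(-1, 1)                        # random initial bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: the neuron "fires" if the weighted sum is positive.
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge weights in the direction that reduces the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

# OR truth table as training data; swap in different data and the
# identical mechanism learns a different function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
results = [predict(w, b, x1, x2) for (x1, x2), _ in data]
print(results)  # → [0, 1, 1, 1]
```

The "training" is nothing but repeated small weight adjustments; scale the neuron count up by billions and feed it text instead of truth tables and you get, in spirit, an LLM.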
Thanks! I've been working on this idea for quite a while. I post summaries and random thoughts occasionally hoping to refine my thinking to the point at which I'll feel comfortable writing a proper essay.
I like the name you've given the overarching system. That's been a bit of a struggle for me, so you've given me a better concept to work with. "Large Sensory Input Model" captures my thoughts better than my own "the brain is just a kind of LLM." That its abbreviation "LSIM" also conjures connections to "simulation" is a bonus for me, because that also addresses my thoughts on how we understand some things and other people.
There is a fairly old hypothesis that something called "Theory of Mind" is basically our brain modelling and simulating other brains as a way to understand and predict the behaviour of others. That has explanatory power: empathy, stereotypes, in/out groups, better accuracy with closer relationships, "living on" through powerful simulations of those closest to us who have died, etc.
Thanks for the feedback!
LLMs are neural networks, an AI technique modelled (in a very simplified way) on how our neurons work, so you are kind of right, but you have it backwards.
Some teachers now training their replacements, free of charge, with thousands of hours of invaluable subject matter expertise around nuanced execution of their work.
Stahhhpppp.
This is the best summary I could come up with:
The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers.
"Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores."
"Once in Writable you can also use AI to create curriculum units based on any novel, generate essays, multi-section assignments, multiple-choice questions, and more, all with included answer keys," the site claims.
Yet, as Axios reports, proponents assert that AI grading tools like Writable may free up valuable time for teachers, enabling them to focus on more creative and impactful teaching activities.
The company selling Writable promotes it as a way to empower educators, supposedly offering them the flexibility to allocate more time to direct student interaction and personalized teaching.
As the generative AI craze permeates every space, it's no surprise that Writable isn't the only AI-powered grading tool on the market.
The original article contains 458 words, the summary contains 150 words. Saved 67%. I'm a bot and I'm open source!
This is an inappropriate use of LLMs in their current form. They will consistently recommend bad practice and produce incorrect statements. Why would a community want to pay for this?
Does no one see the irony of this?
Yes, literally everyone is.