Lemmy Shitpost
I’ve said this on Lemmy a few times before, but 25+ years ago my AI dissertation was on a mushroom identification algorithm. It concluded that even with all the computing power in the world it would not be possible to create an infallible system, and that building one would therefore be wholly unethical, when the cost of failure is death.
25 years later, AI is still the same; we’ve just decided to give it all that computing power.
Just simulate an actual brain on a computer, forget AI.
We are a few years away from that.
The real challenge is simulating a human brain at 10-million-times real-time speed.
By that logic it would be unethical for an expert to give advice, or to even teach others to identify mushrooms, since they too are fallible and it could lead to death?
Or saying it was unethical to invent cars because they can (and most certainly do) cause deaths.
Almost everything would be unethical by that standard. The world is chaotic, nothing is perfect, and deaths happen; all we can do is work to reduce the risks.
What makes an expert is the ability to say "this is unequivocally safe to eat, because I can positively identify it based on this and this feature", as well as "it is not possible/I am not able to confidently identify this mushroom as safe".
So an AI that can identify mushrooms, and can also tell the user when a mushroom is too similar to a dangerous lookalike to be identified with high enough certainty to be safe, would be ethical?
Then how can anyone claim that no such system can ever be created? That makes no sense
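The "refuse to answer when unsure" behaviour being described is just selective prediction. Here's a minimal sketch of the idea, with hypothetical species names and made-up probabilities standing in for a real model's output:

```python
# Hypothetical class labels for a toy mushroom classifier.
CLASSES = ["chanterelle", "jack-o'-lantern (toxic)", "death cap (toxic)"]

def classify_or_abstain(probs, threshold=0.95):
    """Return a label only when the model's confidence clears the
    threshold; otherwise abstain rather than risk a misidentification."""
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None  # abstain: "I can't be sure"
    return CLASSES[best]

print(classify_or_abstain([0.97, 0.02, 0.01]))  # confident: "chanterelle"
print(classify_or_abstain([0.60, 0.35, 0.05]))  # uncertain: None
```

The hard part isn't the thresholding, it's making the underlying probabilities trustworthy (calibrated) in the first place, which is where the whole debate lives.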
A 2D visual representation is not the same as the real thing
So experts cannot identify mushrooms at all by looking at them?
They might turn one around and look at it from different angles, but then just make an AI that takes in multiple images from different angles, and maybe have it ask for more angles if it cannot see everything it needs to see.
And if the experts use other senses besides vision, like smell and touch, just make an AI that says "it might be X or Y, and the only way to tell them apart is through the smell, so I can't be sure".
It’s just anti-AI hate. They’re like flat-earthers
This is the most pot calling the kettle black statement I have ever seen lmao
Yeah, over the top AI hype is annoying, and there are many valid criticisms to be had with regard to how AI is being trained and used (mainly generative AI),
but all this absolutist anti-AI nonsense beats everything
Now I don’t profess to remember the entire paper, but one section was certainly “Human factors”: the difference with an expert is that a human can place emphasis on the dangers above all else, which an AI is often incapable of conveying. And in your car example, the car still has a human driver.
The whole point was that this was a very limited and narrow language model paired with AI image recognition, working under the assumption that the thing the human was describing and picturing was a mushroom, and it’s still fallible. A mushroom identification program specifically is a really bad idea and absolutely unethical to create; a system that answers any question you ask it, where you sort out the guardrails as you go… that’s dangerous.
So the argument is that you tried an AI once and it didn't do a thing, therefore it is impossible to create an AI that is able to do it?
Let's say we reach the point where we can scan and then simulate the entire brain of a mushroom expert. Then you'd have an AI that would give the same responses as a human expert would. Is it ethical now? (Ignoring the ethics of simulating a person like that.)
Simple classification problems are relatively trivial: just train an image classifier to take in a picture of a mushroom and have it predict the type, as well as whether or not the mushroom is similar to a dangerous one, and for good measure whether the picture is good enough to give reliable results. Train it based on feedback from experts and it should end up as reliable as the experts it was based on.
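The scheme described above can be sketched in miniature. The species names and the (cap width, gill spacing) "features" here are hypothetical stand-ins for what a real image model would learn from expert-labelled photos; the point is the shape of the decision rule, not the numbers:

```python
import math

# Toy stand-in for expert-labelled training data. A real system would learn
# features from photos; (cap_width_cm, gill_spacing) here are made up.
TRAINING = {
    "field mushroom": [(8.0, 0.9), (7.5, 0.8)],
    "destroying angel (toxic)": [(7.8, 0.85), (8.2, 0.9)],
    "giant puffball": [(24.0, 0.0), (26.0, 0.0)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def identify(sample, margin=1.0):
    """Nearest-centroid classification that abstains when the best match
    is toxic or sits too close to a lookalike to call safely."""
    ranked = sorted((math.dist(sample, c), label) for label, c in CENTROIDS.items())
    (d1, best), (d2, _) = ranked[0], ranked[1]
    if "toxic" in best or d2 - d1 < margin:
        return "cannot safely identify"
    return best

print(identify((24.5, 0.1)))  # far from any lookalike: "giant puffball"
print(identify((7.9, 0.87)))  # deadly lookalike zone: "cannot safely identify"
```

Note how the field mushroom, which overlaps the destroying angel in this toy feature space, can never be positively identified here, which is exactly the behaviour you'd want from a safety-critical classifier.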
Well, I did study for 5 years, coded the AI myself, and spent 4 months training it using screensaver processing on ~800 computers. It’s not like I downloaded an AI from the play store and declared it to be rubbish. 😀
Even with reinforcement learning from human feedback, this is still a neural network where not every pathway leads to the correct outcome.
Regardless of all the complexities people are still far more accepting of human error than AI error in extreme situations.
Oh, are you walking back the "it would be unethical" claim, and the claim that an AI model cannot give nuanced responses like a human can?
Sounds like you are now saying that a model can be made that is far better than any human expert, but since it can never be perfect, and since people are far less forgiving when machines make mistakes… therefore what, exactly?
If we could make something that would reduce the absolute number of yearly mushroom poisonings, then I would view that as an ethically good thing. Not doing so would be like not making a medicine because it can have side effects; if the benefits outweigh the risks, then I view it as a good thing.