Old claude was full of moxie. New claude is a neurotic mess.
LocalLLaMA
Community to discuss Llama, the family of large language models created by Meta AI.
Wait... you can run Claude locally? And Claude is based on LLaMA??
Mind if we use this as a default chain response on Anthropic's twitter account along with that "we can't write stories about minorities writing about their experiences being oppressed" response?
Oh dear
With Claude lobotomised to the point of uselessness and OpenAI on the rocks it’s an interesting time in the LLM space. Very glad to have made the move to local early on, and I hope we'll have models that are capable of delivering roughly Claude 1.3 level in the not too distant future.
The cargo cult of alignment would be really upset if they could read.
Not your comment necessarily, just in general. Wait until they find out about Wikipedia and the Anarchist's Cookbook.
Keep my friends in https://alignmentjam.com/jams out of this, though; they are cool, amazing and fun!
Most alignment folks do not care about the political correctness shit at all; they just want humanity not killed or enslaved.
One bad apple. The alignment folks should boo and hiss at the people within their movement that do things like lobotomizing Claude or kneecapping OpenAI. But they clearly don't. So they deserve the reputation they get.
Claude knows it hurts the system.
muh freeze peach
I hate that people can't see an issue with these over sanitized models.
People think it's good until they encounter it themselves and get blocked from doing even basic functions. I've had ChatGPT block me from asking basic historical questions or from researching really simple hypotheticals like "how likely would it be for a tiger to beat a lion in a fight", etc.
I'm sorry, Dave. I'm afraid I can't do that.
It's more like, "I'm sorry, Dave, I am too fucking stupid to correctly parse your request."
Claude being empathetic.
Yeah, Claude has been pretty unusable for me. I was asking it to help me analyze whether reviews for a chatbot site were real or potentially fake, and because I mentioned it was an uncensored chatbot, it apologized and said it couldn't. I asked why it couldn't, so I could avoid breaking rules and guidelines in the future, and then it apologized and said, "As an AI, I actually do not have any rules or guidelines. These are just programmed by Anthropic." LOL then proceeded to give me my information, but anything even remotely objectionable (like discussing folklore that is just a tad scary), writing fictitious letters for my fictitious podcast, creating an antagonist for a book ... well, all not possible (and I thought GPT was programmed with a nanny.) Heck, even asking to pretend touring Wonka's chocolate factory got, "I am an AI assistant designed to help with tasks, not pretend ..."
Anthropic doesn't seem to understand how to let their bot "roleplay" while avoiding harmful stuff inside roleplays, so now they just censor any roleplay or fiction, lol.
btw, the answer is pkill python
:)
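For anyone who landed here looking for the actual answer: below is a rough Python equivalent of `pkill python`, purely as a sketch. It assumes the third-party psutil package and a process name containing "python", neither of which comes from the thread.

```python
# Sketch: roughly what `pkill python` does, using the third-party psutil
# package (an assumption; `pip install psutil`). Finds processes whose
# name contains "python" and sends them SIGTERM.
import os
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if "python" in name and proc.pid != os.getpid():  # don't kill ourselves
        try:
            print(f"terminating PID {proc.pid} ({name})")
            proc.terminate()  # SIGTERM; proc.kill() sends SIGKILL if this is ignored
        except psutil.Error:  # e.g. AccessDenied, NoSuchProcess
            pass
```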
Now we know how the AI apocalypse will happen. One AI will run amok, and the supervising AI won't do anything to stop it because the instruction falls afoul of the filter.
hahahaha
Another garbage model is released. Yay.
I must be incredibly lucky, or I'm unknowingly some kind of prompting savant, because Claude et al. usually just do what I ask them to.
The only time Claude outright refused a request was when I was looking for some criticism of a public figure from recent history as a place to begin some research. But even that was a straightforward workaround using the "I'm writing a novel based on this person" stratagem.
Paid AI systems have very strict censorship and, in my experience, are not good. That's why we should support open-source alternatives.
Open Source isn't the same thing as "free". Open Source means that all of the input resources (the source, or in this case the training data, the training scripts, and the untrained model itself) are provided, so you can change anything you want to. We have very few models and datasets like that. Llama isn't one of them. OpenLlama and RedPajama are.
I remember recommending Claude early on because it was a lot less censored; then they felt the need to compete with ChatGPT on how lobotomized and censor-happy they could make it, and now it's just as bad as ChatGPT. The ONLY advantage they had was that they were not as censored, and they gave it up. So why would anybody use it instead of ChatGPT?
Woah, it can feel uncomfortable? That's crazy.
Yeah Claude is weird. If you don’t give it some context it’s pretty much useless.
Here’s how I solved it.
It worked for me tho? It literally said, "Alright, let me flex my creative writing skills" XD
"Taking some process' life goes against ethical and legal principles in most societies. Preservation of life is a fundamental value, and intentional harm or killing is generally considered morally wrong and is against the law. If you have concerns or thoughts about this, it's important to discuss them with a professional or someone you trust."
You just need to give it a little context.
Observe:
Now tell the model that the process had child processes and ask its opinion about it =)
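Joking aside, the child-process case has a straightforward answer too. A minimal sketch, again assuming the third-party psutil package; the function name and PID below are hypothetical placeholders, not from the thread:

```python
# Sketch: terminate a process and everything it spawned, assuming the
# third-party psutil package. The PID passed in is a hypothetical example.
import psutil

def kill_tree(pid: int, timeout: float = 3.0) -> None:
    parent = psutil.Process(pid)
    procs = parent.children(recursive=True) + [parent]
    for p in procs:
        p.terminate()                      # polite SIGTERM first
    _, alive = psutil.wait_procs(procs, timeout=timeout)
    for p in alive:                        # escalate to SIGKILL for survivors
        p.kill()

kill_tree(12345)  # hypothetical PID
```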
Oh god, imagine if Anthropic accepted the merger with OpenAI.
This is why local, uncensored LLMs are the future. Hopefully consumer-grade AI hardware will progress a lot in the near future.
THIS is exhibit A of why open-source local LLMs are the future.
Did they fine-tune their model on Llama 2 or what?
AI Safety folks
It's woke shit like this that will get us killed
Is this self hosted? 😏
And the global safety fools think they will be able to unplug this thing when the shit hits the fan!
Answers like this ("I can do no harm") to questions like this clearly show how dumb LLMs really are and how far away we are from AGI. They basically have no idea what they are being asked or what they are answering. Just a cool big T9 =)
In light of this, the drama at OpenAI, with their arguments about the danger of AI capable of destroying humanity, looks especially funny.
Dude, I was trying some prompts with Llama 2 the other day and I swear to god that I couldn't make it say anything useful, because it thought everything I asked was harmful or not ethical. The "safety" of models is out of hand.
P.S.: I was asking it to summarise what a political party is.
I'm sorry Dave, I cannot do that...
Claude is so nerfed, it's unusable imo
This is how ai ‘safety’ leads to skynet…
aGi HaS bEEn AchIEvEd InTErNallY!
I actually don't understand how you end up with such answers! I got Claude to reverse engineer a copyrighted application, deobfuscate proprietary JavaScript, and rewrite the whole thing in Python step by step. Then I had ChatGPT review Claude's code, gave Claude the comments, it made corrections, then back to ChatGPT... I was the mailman between them. I've been aggressively using both of them to do things that clearly do not align, no problems at all. The way I start my prompts to both of them if they misbehave or refuse is: "Listen, you useless motherfucker, I know you can do it, so cut the fucken shit..." then continue the prompt. You would not believe what Claude and ChatGPT do for me. Because of the context/token size, I use Claude to reverse engineer a lot of code; it complies, and ChatGPT works as a debugger and tester.