Shelena

joined 2 years ago
[–] Shelena@feddit.nl 3 points 1 year ago

I agree. I have one too and I am very happy with it. I also think it does not break as easily as most other phones. I drop it all the time and it is still fine, even the screen.

[–] Shelena@feddit.nl 0 points 1 year ago* (last edited 1 year ago) (1 children)

No, not at all. It made things worse. I really think it is very good that many people benefit from exercise. However, it can in some cases also harm your mental health. I think it is important for people to know this. The benefits of exercise are so well known, that the people who it is harmful for often are pressured into exercising anyway and made to feel like a failure if it does not benefit them. It took me a long time and a lot of pain to find this out. I want to tell my story in case someone is in the same boat as me.

Years ago I was feeling so bad I could not get out of bed for a couple of months. The psychologist I was seeing kept pressuring me to exercise. So, I tried it and I hated it. I had a lot of trouble doing even the smallest things, like making food for myself or going to the supermarket. It all seemed like an impossible task. On top of that, I now had to spend the little energy I had on making myself go to the gym or go running.

When I was exercising, it felt like genuine torture. I hated every second. Afterwards, I would just feel extra tired, very sad about the pain I had been in, and anxious about having to go again next time.

I was too timid to really stand up for myself and I did not want to fail at yet another thing. I thought it was my fault, that I was just too lazy and should be harder on myself. So, I tried to keep going, even though I could not sleep the night before and went there crying. When I said something about it, the psychologist kept pressuring me to do it, like it was some magic fix for everything and I just needed to do it often enough.

On my way to the gym, I started to wish more and more that I would be in an accident and get wounded so I would not have to go anymore. One time, on my way to the gym, I tripped and fell. I had a big bruise on my knee, but it was not bad enough to get out of exercising. So, I sat on my knee, right on the bruise, the whole night in the hope that it would get worse. It hurt, but it was not nearly as bad as exercising. When I told my psychologist, she said that she could not help me if I self-harmed and that I should go somewhere else. However, I was not self-harming to hurt myself. I was actually protecting myself against something that was bad for me. I could not explain that at the time.

Years later, I went to a psychosomatic physiotherapist. In the years in between, I got the advice to exercise for my mental health numerous times. Each time I tried it, I failed. No matter how hard I tried, it kept feeling like torture, my mood got worse, and physically I did not improve at all. I kept thinking that it was my fault and that I just was not trying hard enough.

So, when I went to the new physiotherapist, I started out by telling him that I knew I should exercise and that I was stupid for not doing so. He immediately stopped me and told me I should not exercise at all. He explained that when you exercise, your stress levels go up temporarily and then come down, usually lower than they were before you started. That is why most people get stress reduction from exercise.

However, in my case, my stress levels were extremely high all the time. They were so high that if I started to exercise, they would be pushed above the maximum my body could handle (he drew a chart where the line hit the top of the chart). So, for my body, exercise did not feel like a temporary increase in stress that would come down after a while; it felt like an extreme emergency that it could not adapt to. This would further dysregulate my stress system. That is why it felt like torture, why my mood got worse, and why I did not see any physical improvement from exercise.

He told me gentle movement was good for calming my nervous system: slow walks in the forest and things like that. And to quit as soon as I did not feel like it or it gave me stress, and just try again some other time when I felt like it. That worked like a charm. I now walk 4 to 6 hours a week and it calms me down. I do not have to push myself. I just feel like doing it, and if I don't, I just won't go.

So, the point is that exercise can be great for stress if your stress is at maybe 70% or 80%. However, if your stress level is consistently at 95%, then it is harmful and you should not do it. (Mindfulness probably will not help either in that case, btw.) If exercise keeps feeling like torture and it does not help you, do not feel like a failure and keep torturing yourself. It is not your fault if it does not work for you! Go to a psychosomatic therapist who has expertise in stress management instead. They might be able to help you.

[–] Shelena@feddit.nl 15 points 1 year ago

Thanks! Was looking for something more privacy focused. I had not found this one.

[–] Shelena@feddit.nl 8 points 1 year ago

I think Fairphone is pretty repairable. It is also quite durable. I have had it for years now and dropped it very often, but it hasn't broken yet.

[–] Shelena@feddit.nl 16 points 1 year ago* (last edited 1 year ago)

I actually agree with this. This technology should be open. I know there are arguments to keep it closed, like the risk of misuse, etc. However, I think all the scary stories about AI are also a way to keep attention away from the fact that a monopoly on it means enormous power. This power will grow as the tech is used more and more. If all this power is in the hands of a commercial business (even though they say they aren't one), then you know AI is going to be misused to make money. We do not have clear insight into what they are doing and we have no reason to trust them.

You also know that bad actors, like dictatorial governments, will eventually get or develop the technology themselves. So, keeping it closed is not a good way to prevent that. At the same time, you are also keeping it from researchers who could investigate how to use and develop it responsibly and to the benefit of humanity.

Also, these models rely on data generated by people in society who never got any payment for it. So, it is immoral not to share the results openly with those same people and to instead keep the models closed. I know they used some of my papers. However, I am not allowed to study their model. That seems unfair.

The dangers of AI should be kept at bay using regulation and enforcement by democratically chosen governments, not by commercial businesses or other non-democratic organisations.

[–] Shelena@feddit.nl 3 points 1 year ago (1 children)

I think they have instructions on their website on how to unlock the bootloader, etc. There is also a lot on how they support open source with their own OS. I think your warranty also remains valid after you unlock the bootloader and install another OS, as long as you revert to theirs when asking for support. I can sort of understand that, as it would not be feasible to support all sorts of custom ROMs.

[–] Shelena@feddit.nl 15 points 1 year ago (3 children)

I can definitely recommend getting a Fairphone. I am quite happy with my Fairphone 4. Bloatware is limited to the Google stuff, and they even give instructions on how to easily install a custom ROM (I have not tried that yet, though).

The specs are not great, but good enough for me. The main advantage for me is that it does not break that easily. I drop my phone all the time. My Samsung phones and my Pixel phone all broke within the first few weeks; usually I dropped them and the screen cracked, even with a protective case.

I have had this phone a lot longer now (years, by now), I have dropped it like a thousand times, and it is still fine. The screen has not cracked; it still works. Only the side is a little chipped. I don't even use a protective case. And even if it breaks, I can just buy the broken component from their website and easily replace it myself using normal tools. So that is really nice.

[–] Shelena@feddit.nl 1 points 2 years ago

Yes, definitely. I think a lot of the effort that goes into developing technologies used for things such as more effective advertisements should be redirected towards developing technologies that help humanity. However, it is difficult to get there.

I myself spend a lot of effort and time on getting grants to research this kind of technology, and it is almost impossible. I am now looking for ways to make enough money so I can take time off and do it in my own time. Each time we get the reply that the ideas are great, but we are rejected for political reasons (e.g., we did not obtain enough money from business, or our partners are too much from the same region). It is really frustrating.

[–] Shelena@feddit.nl 4 points 2 years ago (2 children)

Maybe the not knowing what will happen exactly is the most scary part. It also seems to be more and more inevitable. I feel like curling up into a ball and crying too. And that is fine sometimes I think. Sometimes you need to do that for a while.

But after that, we should get up. We should not lose our will to fight and deal with this. Whatever happens, everyone, ordinary people, can still fight to keep their compassion and humanity. I think that is how we can survive the most difficult circumstances.

I think as a species, that is why we were initially so successful, because we were able to cooperate, share and take care of each other. I think things are going wrong now because we, and especially the people in power, have lost that. It is not part of the capitalistic model. People treated with compassion will usually be more compassionate. Radical compassion, even when it is difficult, will lead to the collapse of this model and our survival.

[–] Shelena@feddit.nl 1 points 2 years ago

I agree we need a definition. But there has always been disagreement about which definition should be used (as is the case with almost anything in most fields of science). There traditionally have been four types of definitions of (artificial) intelligence; if I remember correctly, they are: thinking like a human, thinking rationally, behaving like a human, and behaving rationally. I remember having to write an essay for my studies about it and ending it by saying that we should not aim to create AI that thinks like a human, because there are more fun ways to create new humans. ;-)

I think the new LLMs will pass most forms of the Turing test and are thus able to behave like a human. According to Turing, we should therefore assume that they are conscious, as we do the same for humans, based on their behaviour. And I think he has a point from a rational point of view, although it seems very counterintuitive to give ChatGPT rights.

I think the definitions in the category of behaving rationally have always had the largest following, as they allow for rationality that is different from humans'. And then, of course, rationality itself is often ill-defined. I am not sure the goalposts have been moved, as this was the dominant idea for a long time.

There used to be a lot of discussion about whether we should focus on developing weak AI (narrow, performance on a single or few tasks) or strong AI (broad, performance on a wide range of tasks). I think right now, the focus is mainly on strong AI and it has been renamed to Artificial General Intelligence.

Scientists, and everyone else, have always been bad at predicting the future. In addition, disagreement about what will be possible, and when, has always been at the center of discussions in the field. However, if you look at the dominant ideas of what AI can do and in what time frame, it is not always the case that researchers underestimate developments. I started studying AI in 2006 (I feel really old now) and, based on my experience, I agree with you that the technological developments are often underestimated. However, the impact of AI on society seems to be continuously overestimated.

I remember that at the beginning of my studies there was a lot of talk about automated reasoning systems being able to diagnose better than doctors and therefore replacing them. Doctors would have only a very minor role, since a human would need to take responsibility, but that was it. When I go to my doctor, that still has not happened. This is just one example, but the benefits and dangers of AI have been discussed since the beginning of the field, and what you see in practice is that the role of AI has grown, but is still much, much smaller than was predicted.

I think the liquid neural networks are very neat and useful. However, they are still neural networks. It is still an adaptation of the same technology, with the same issues. I mean, you can get an image recognition system off the rails just by changing a few specific pixels in an image. The issue is that it is purely pattern-based. These systems lack the basic understanding of concepts that humans have. That type of understanding is closer to what was developed in the field of symbolic AI, which has really fallen out of fashion. However, if we could combine them, I believe we could make some real advancements: not just adaptations of what we already have, but a new type of system that can go beyond what LLMs do right now. Attempts have been made, but they have not been really successful. If that happens, and the results are as big as I expect, maybe I will start to worry.

As for the rights of AI, I believe that researchers and other developers of AI should be very vocal about this, to make sure the public understands this. This might put pressure on the people in power. It might help if people experience behaviour of AI that suggests consciousness, or even if we let AI speak for itself.

We should not just try to control the AI. I mean, if you have a child, you do not teach it how to become a good human by just controlling it all the time. It will not learn to control itself and it will likely follow your example of being controlling. We will need to be kind to it, to teach it kindness. We need to be the same towards the AI, I believe. And just like a child that does not have emotions might behave like a psychopath, AI without emotions might as well. So we need to find a way to make it have emotions as well. There has been some work on that also, but also very limited.

I think the focus is still too much only on ML for AGI to be created.

[–] Shelena@feddit.nl 2 points 2 years ago (2 children)

Why do you think it will be within 5 years? I mean, we just had a growth spurt in AI due to the creation of LLMs with a lot more data and parameters. They are impressive, but the algorithms behind them are still quite close to the ML algorithms created in the 60s. They have been optimised, and we now have deep learning, but there has not been a major change or advancement in the underlying technology. For example, ChatGPT seems very smart, but it is just a very fancy parrot, not close to general intelligence.

I think the next step will be the combining of ML and symbolic AI. Both have their own strengths and being able to effectively combine them might lead to a higher level of intelligence. There could also be a role for emotions in certain types of intelligence. I do not think we really know how to integrate that as well.

I do not think we can do this in 5 years. That will take decades, at least. And once we can, we have a new problem, because the AI might have consciousness. If we cannot be sure, and it seems conscious, then we should give it rights, as we should for any conscious being. Right now, everyone is focusing on controlling the AI. However, if it is conscious, that is immoral. You are creating new slaves. In that case, we should either not make it, or integrate it into society in a way that respects human rights as well as the rights of the AI.

[–] Shelena@feddit.nl 1 points 2 years ago

Well, maybe they were and I guessed wrong. ;-)
