backgroundcow

joined 2 years ago
[–] backgroundcow@lemmy.world 5 points 19 hours ago* (last edited 19 hours ago)

If someone is trying to do the most good with their money, it seems logical to give via an organization that distributes the funds according to a plan. Handing out money to whoever happens to be closest at hand seems motivated more by making myself feel good than by actually making a difference.

Furthermore, there are larger-scale systemic issues. Begging takes up a lot of time. It becomes a problem if it pays someone enough to outcompete more productive uses of that time, which could in some cases pay, and in other cases at least be more useful: childcare, teaching kids, home maintenance, cooking, cleaning, etc. In contrast, state welfare programs and aid organizations usually do not condition help on the recipient sitting idle for long stretches. Add to this that begging really only works in crowded areas, which may limit the option of relocating somewhere living might be more sustainable. Hence, in the worst case, handing out money to those who beg for it could actually make it harder for people stuck in a very difficult situation to get out of it.

This "analysis" of course skips over the many, many individual circumstances that get people into a situation where begging seems the right choice. What we should be doing is investing public funds even more heavily in social programs and other aid to (1) prevent, as far as possible, people ending up in these situations; and (2) get people out of these situations as effectively as possible.

[–] backgroundcow@lemmy.world 1 points 5 days ago

No shade on people trying to make sustainable choices, but if the solution to the climate crisis is trusting everyone to "get with the program" and pick the right option while unsustainable alternatives sit right there beside it at lower prices, then we are truly doomed.

What the companies behind these foods and products don't want to talk about is that to get anywhere we have to target them. It shouldn't be a controversial standpoint that: (i) all products need to cover their true, full environmental and sustainability costs, with the money going back into environmental investments that counteract the negative impacts; and (ii) we need to regulate, regulate, and regulate how companies are allowed to interact with the environment and society, and these limits must apply worldwide. There needs to be careful enforcement of these rules: with consequences for the individuals who make the decisions to break them AND "death sentences" (i.e., complete disbandment) for whole companies that repeatedly overstep.

[–] backgroundcow@lemmy.world 10 points 2 weeks ago (3 children)

What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data

Prove to me that this isn't exactly how the human mind -- i.e., "real intelligence" -- works.

The challenge with asserting how "real" the intelligence-mimicking behavior of LLMs is isn't to convince us that it "just" results from cold, deterministic statistical algorithms running on silicon. This we know, because we created them that way.

The real challenge is to convince ourselves that the wetware electrochemical neural unit embedded in our skulls, which evolved through a fairly straightforward process of natural selection to improve our odds of surviving, isn't relying on statistical models whose inner working principles are, essentially, the same.

All these claims that human creativity is so outstanding that it "obviously" will never be recreated by deterministic statistical models that "only" interpolate knowledge, picked up from observing human output, into new contexts: I just don't see it.

What human invention, art, or idea was so truly, undeniably, completely new that it cannot have sprung out of something that came before it? Even the bloody theory of general relativity, held as one of the pinnacles of human intelligence, has clear connections to what came before. If you read Einstein's works, he is actually very good at explaining how he worked it out in increments from earlier models and ideas ("what happens with a meter stick in space", etc.): i.e., he was very good at using the tools we have to systematically carry our understanding from one domain into another.

To me, the argument in the linked article reads a bit like "LLM AI cannot be 'intelligence', because when I introspect I don't feel like a statistical machine". This seems about as sophisticated as the "I ain't no monkey!" counter-argument against evolution.

All this is NOT to say that we know that LLM AI = human intelligence. It is a genuinely fascinating scientific question. I just don't think we have anything to gain from the "I ain't no statistical machine" line of argument.

[–] backgroundcow@lemmy.world 9 points 2 weeks ago

That's perfect. You already know your lines!

[–] backgroundcow@lemmy.world 2 points 1 month ago

After years of sysvinit experience, the transition to setting up my own systemd services has been brutal. What finally clicked for me was that I had a habit of building mini-services out of shell scripts, and systemd goes out of its way to deliberately break those: it wants a single stable process to monitor, and if it sniffs out that you are doing something sketchy that forks in ways it disapproves of, it will shut the whole thing down.
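To sketch what I mean (the unit and script names here are made up): a shell script that daemonizes by double-forking looks, under the default `Type=simple`, like a service whose main process died, and systemd then reaps everything left in its cgroup. Declaring the forking behavior explicitly is one way to keep systemd from killing it:

```ini
# /etc/systemd/system/mini-service.service  (hypothetical example)
[Unit]
Description=Example shell-script mini-service

[Service]
# Type=forking tells systemd the start command is *expected* to fork
# and exit; systemd then tracks the daemonized child via the PID file
# instead of concluding the service has died.
Type=forking
PIDFile=/run/mini-service.pid
ExecStart=/usr/local/bin/mini-service.sh start
ExecStop=/usr/local/bin/mini-service.sh stop
# By default systemd kills every remaining process in the service's
# cgroup when the main process exits; KillMode=process relaxes that
# to only the main process.
KillMode=process

[Install]
WantedBy=multi-user.target
```

The more idiomatic fix, of course, is to stop daemonizing in the script entirely and let systemd supervise the script as a foreground process under `Type=simple`.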

[–] backgroundcow@lemmy.world 1 points 2 months ago

I very much understand wanting to have a say against our data being freely harvested for AI training. But this article's call for a general opt-out of interacting with AI seems a bit regressive. Many aspects of this and other discussions about the "AI revolution" remind me of the Mitchell and Webb skit on the start of the bronze age: https://youtu.be/nyu4u3VZYaQ

[–] backgroundcow@lemmy.world 25 points 2 months ago

John Oliver had a segment on this that may help convince people that it is real: https://youtu.be/3kEpZWGgJks

[–] backgroundcow@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

These two are not interchangeable or really even comparable though?

For GNU Make, yes they are. These are fully comparable tools for writing sophisticated dynamic build systems. "Plain make", not so much.
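To sketch what I mean by "dynamic" (the paths and names here are hypothetical): GNU Make's functions and pattern rules let a Makefile discover its sources and track header dependencies on its own, the kind of job people often assume requires a meta build system:

```make
# Hypothetical GNU Make sketch: discover sources, build objects out of
# tree, and auto-track header dependencies -- no meta build system.
CC      := gcc
CFLAGS  := -Wall -O2 -MMD -MP        # -MMD/-MP emit .d dependency files
SRCS    := $(wildcard src/*.c)       # dynamic source discovery
OBJS    := $(patsubst src/%.c,build/%.o,$(SRCS))

app: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

build/%.o: src/%.c | build
	$(CC) $(CFLAGS) -c -o $@ $<

build:
	mkdir -p $@

-include $(OBJS:.o=.d)               # pull in generated dependency files

.PHONY: clean
clean:
	rm -rf build app
```

None of this is available in "plain" (POSIX) make, which is the comparison people usually have in mind.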

[cmake] makes your build system much, much more robust, far easier to maintain, much more likely to work on other systems than your own, and far easier to integrate with other dependent projects.

This is absolutely incorrect. I assume (although I have never witnessed it) that a true master of cmake could use it to create a robust, maintainable, transferable build system. Very much like there are people who can carve delicate ice sculptures with a chainsaw. But in no way do these properties follow from the choice of cmake as a build system (as insinuated in your post); rather, the phrase we are looking for here is: despite using cmake.

I apologize for my inflammatory language. I may just have a bit of PTSD from having to build a lot of other people's software through multiple layers of meta build systems. And cmake comes up, time and time again, as the one introducing loads of obstacles.

[–] backgroundcow@lemmy.world 23 points 2 months ago (3 children)
[–] backgroundcow@lemmy.world 19 points 2 months ago

Thanks for giving the link and making this an easy 1-click thing. Just donated.

[–] backgroundcow@lemmy.world 5 points 3 months ago (1 children)

On the topic of things never to forgive Red Hat for, aren't there more pressing ones? Like inventing a whole scheme to circumvent the intent of the GPL via service-contract blackmail?
