NeatNit

joined 10 months ago
[–] NeatNit@discuss.tchncs.de 1 points 10 months ago (2 children)

You're really cherry-picking from what I said, and then making up stuff I didn't say. Good talk.

[–] NeatNit@discuss.tchncs.de 1 points 10 months ago (5 children)

True, with the acknowledgement that this was their plan all along and the research part was always intended to be used as a basis for a product. They just used the term ‘research’ as a workaround that allowed them to do basically whatever to copyrighted materials, fully knowing that they were building a marketable product at every step of their research

I really don't think so. I do believe OpenAI was founded with genuine good intentions. But around the time it transitioned from a non-profit to a for-profit, those good intentions were getting corrupted, culminating in the OpenAI of today.

The company's unique structure, with a non-profit board of directors controlling the company, was supposed to prevent short-term profit interests from taking precedence over long-term AI safety and the like. I don't know any of the details beyond that. We all know it failed, but I still believe the whole thing was set up in good faith, way back when. Their corruption was a gradual process.

There are little to no arguments FOR AI

Outright not true. There are so freaking many! Here are some examples off the top of my head:

  • Just today, my sister told me how ChatGPT (her first time using it) identified a song for her based on her vague description of it. She had been looking for this song for months with no success, even though she had pretty good key details: it was a duet, released around 2008-2012, and she even remembered a certain line from it. Other tools simply failed, and ChatGPT found it instantly. AI is just a great tool for these kinds of tasks.
  • If you have a huge amount of data to sift through, looking for something specific that isn't presented in a consistent format - e.g. find all arguments for and against assisted dying in this database of 200,000 articles with no useful tags - then AI is the perfect springboard. It can filter a huge dataset down to a tiny fraction, which is small enough to then be processed by humans (see the sketch after this list).
  • Using AI to identify potential problems and pitfalls in your work that can't realistically be caught by directly programmed QA tools. I don't have a particular example in mind right now, unfortunately, but this is a legitimate use case for AI.
  • Also today, I stumbled upon Rapid, a map editing tool for OpenStreetMap which uses AI to predict and suggest things to add, with the expectation that the user will make sure the suggestions are good before accepting them. I haven't formed a full opinion about it in particular (and I'm especially wary because it was made by Facebook), but these kinds of productivity boosters are another legitimate use case for AI. Also in this category is GitHub's Copilot, which is its own can of worms, but if Copilot's training data hadn't been stolen the way it was, I don't think I'd have many problems with it. It looks like a fantastic tool (I've never used it myself) with very few downsides for society as a whole. Again, other than the way it was trained.
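Here's a minimal Python sketch of that dataset-filtering idea, assuming a generic chat-style LLM you can send a prompt to. `ask_llm` is a placeholder I made up, not any real library's API; the point is just the shape of the pipeline.

```python
# Minimal sketch of the "filter 200,000 articles" idea above.
# ask_llm() is a placeholder for whatever LLM API you use (OpenAI,
# a local model, etc.); nothing here is a real library call.

def ask_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its text reply."""
    raise NotImplementedError("wire this up to an actual LLM API")

def filter_relevant(articles: list[str]) -> list[str]:
    """First pass: keep only articles the LLM flags as containing an
    argument for or against assisted dying. Humans review what's left."""
    relevant = []
    for text in articles:
        answer = ask_llm(
            "Does the following article contain an argument for or against "
            "assisted dying? Answer YES or NO.\n\n" + text[:4000]
        )
        if answer.strip().upper().startswith("YES"):
            relevant.append(text)
    return relevant
```

The LLM does a cheap first pass over the whole dataset, and humans only need to read the small fraction that survives.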

As for generative AI, and pictures especially, I can't as easily offer non-creepy uses for it, but I recommend this video, which takes a very frank look at the matter: https://nebula.tv/videos/austinmcconnell-i-used-ai-in-a-video-there-was-backlash if you have access to Nebula, https://www.youtube.com/watch?v=iRSg6gjOOWA otherwise.
Personally I'm still undecided on this sub-topic.

Deepfakes etc. are just plain horrifying; you won't hear me give them any wiggle room.

Don't get me wrong - I'm not saying today's OpenAI isn't rotten at the core - it is! But that doesn't mean ALL instances of AI that could ever exist are evil.

[–] NeatNit@discuss.tchncs.de 1 points 10 months ago* (last edited 10 months ago)

We're just trying to pit Disney and OpenAI against each other

/s(?)

[–] NeatNit@discuss.tchncs.de 24 points 10 months ago (7 children)

hijacking this comment

OpenAI was IMHO well within its rights to use copyrighted materials when it was just doing research. They were* researching how far large language models can be pushed and where the ceiling is. It's genuinely good research, and if copyrighted works are used only for research, and what gets published is the findings of the experiments, that's perfectly okay in my book - and, I think, in the law as well. In this case, the LLM is an intermediate step, and the published research papers are the "product".

The unacceptable turning point came when they took all the intermediate results of that research and flipped them into a product. That's not the same thing, and I think most or all of us here can agree: this isn't okay, and it's probably illegal.

* disclaimer: I'm half-remembering things I've heard a long time ago, so even if I phrase things definitively I might be wrong

[–] NeatNit@discuss.tchncs.de 2 points 10 months ago* (last edited 10 months ago)

This should be an optional feature for moderators. Mods from both communities must virtually shake hands and merge their communities into one. They could tweak how cross-moderation works. If one side becomes unmanageable, the other side can cut the line and split the community again.

Genuinely sounds like a solid idea to me. There are some lingering questions - both technical and non-technical - but they're fairly small. Such as:

  1. How easy or hard is it to implement?
  2. When communities merge, do their histories merge too or do only new posts show up to both? (My opinion: only new posts)
  3. When a merged community splits, do both sides keep a full copy of the posts from the time they were merged, or do they delete the posts that were posted to the other community? (My opinion: keep the history)
  4. Do they have to match everything - community description, exact wording of rules, graphics, exact name, etc - or do they just need to show each other's posts? (My opinion: just show each other's posts. It should basically be an automatic cross-post.)
  5. Should Lemmy software make this apparent to users, or should the responsibility lie with the mods to make the announcement? This question could be asked separately for merge events, split events, and the merged steady state - i.e. should Lemmy show some info while the communities are merged. (My opinion: especially for splits, it's important to let the users know, even if the mods would rather hide it. The other cases could be left up to the mods, although it would do no harm if Lemmy showed you which communities are merged.)

My answers to those questions reflect what I think is the "right" way to do it, but I also suspect my opinions on 2-4 are the easiest to implement - there's a rough sketch of that below.
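To make the "automatic cross-post" idea from point 4 concrete, here's a rough Python sketch of how merge, posting, and split could behave under my preferred answers to 2-4. Every name in it (`Community`, `merge`, `submit`, `split`) is made up for illustration; this is not Lemmy's actual code or federation protocol.

```python
# Rough sketch of the "merge = automatic cross-post" model, matching my
# opinions on questions 2-4. All names are made up for illustration.

from dataclasses import dataclass, field

@dataclass
class Community:
    name: str
    posts: list[str] = field(default_factory=list)
    partners: set[str] = field(default_factory=set)  # merged communities

def merge(a: Community, b: Community) -> None:
    """Both mod teams shake hands; from now on new posts propagate.
    Old history stays separate (question 2: only new posts)."""
    a.partners.add(b.name)
    b.partners.add(a.name)

def submit(post: str, target: Community, communities: dict[str, Community]) -> None:
    """A new post lands in the target community and is automatically
    cross-posted to every merged partner (question 4)."""
    target.posts.append(post)
    for name in target.partners:
        communities[name].posts.append(post)

def split(a: Community, b: Community) -> None:
    """Either side can cut the line; both keep their full copy of the
    posts from the merged period (question 3)."""
    a.partners.discard(b.name)
    b.partners.discard(a.name)
```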

[–] NeatNit@discuss.tchncs.de 1 points 10 months ago

only things

Just to list a few things EVs are good at that public transit isn't:

  • private transportation that isn't long-distance, as abhibeckert@lemmy.world's comment describes
    • less total energy consumption
    • no tailpipe emissions poisoning everyone around you and yourself, so e.g. inside a parking structure the air wouldn't be so awful to breathe
  • anything you can't do on public transit, such as:
    • moving small furniture
    • taking your pet to a vet, or anywhere else for that matter (pets might be allowed on public transport in certain cases, but it would still be much more of a hassle than the usual difference between taking public transport and driving a personal car)
  • driving on highways - an EV is still good at that, it's just even better in other scenarios.

I don't want a car and I use public transportation, but I couldn't let that claim fly. EVs have their place. Not to mention electric buses are EVs, and they make for an even better riding experience.
