this post was submitted on 21 Sep 2024
84 points (71.4% liked)

Please remove this if it isn't allowed

I see a lot of people here who get mad at AI-generated code and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write the entire script by itself; that would be a horrible idea. Instead, I would ask it questions along the way and test its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about bash's syntax. I could have used any other language I knew, but I chose bash because it made the most sense: it ships with most Linux distros out of the box, so nobody has to install another interpreter or compiler. I don't like bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. I also had never written anything of this complexity in bash before, just a bunch of commands on separate lines so that I wouldn't have to type them one after another. But this project required many rather advanced features. I was not motivated to learn bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not easily find how to pass values into a function and return from one, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from the previous command, or how to separate the letters and numbers in a string.
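For anyone curious, here is roughly what those pieces ended up looking like (a minimal sketch; the file names and the "abc123" string are just made-up examples, not my actual script):

```shell
#!/usr/bin/env bash

# Pass a value into a function; "return" a string by echoing it
# and capturing the output with $(...).
strip_trailing_slash() {
    local path="$1"
    echo "${path%/}"          # parameter expansion removes one trailing /
}

dir=$(strip_trailing_slash "/home/user/docs/")

# Loop over an array.
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
    echo "processing $f"
done

# Catch an error from the previous command via its exit status.
if ! grep -q "pattern" a.txt 2>/dev/null; then
    echo "grep failed or found nothing" >&2
fi

# Separate the letters and the digits in a string like "abc123".
s="abc123"
letters="${s//[0-9]/}"        # delete all digits
digits="${s//[^0-9]/}"        # delete all non-digits
```

None of these are exotic, but when you don't know the syntax exists, they are surprisingly hard to search for.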

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I encountered them, then test its output with various inputs to see if it worked as expected. If not, I would tell it which case failed and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of bash can quickly and easily write bash that is fairly advanced. I don't think I could have written what I wrote nearly as fast the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. With ChatGPT I can just write all this quickly and forget about it. If I ever want to learn bash properly and am motivated, I will certainly take the time to learn it the right way.

What do you think? What negative experiences with AI chatbots have made you hate them?

[–] tabular@lemmy.world 10 points 3 months ago (1 children)

Whether it's a complicated neural network or a database matters not: by design, it outputs portions of the code that was used as its input.

If you can take GPL code and "not" distribute it by passing it through complicated maths, then that circumvents the license. That won't do, friendo.

[–] simplymath@lemmy.world 2 points 3 months ago (1 children)

For example, if I ask it to produce python code for addition, which GPL'd library is it drawing from?

I think it's clear that the fair use doctrine no longer applies when OpenAI turns it into a commercial code assistant, but then it gets a bit trickier when used for research or education purposes, right?

I'm not trying to be obtuse-- I'm an AI researcher who is highly skeptical of AI. I just think the imperfect compression that neural networks use to "store" data is a bit less clear than copy/pasting code wholesale.

Would you agree that somebody reading source code and then reimplementing it (assuming no reverse engineering or access to proprietary source code) would not violate the GPL?

If so, then the argument that these models infringe on rights holders seems to hinge on the claim that their exact work was reproduced verbatim without attribution or license compliance. That surely happens sometimes, but it is not, in general, something these models are capable of, since they use lossy compression to "learn" the model parameters. As an additional point, it would be straightforward to comply with DMCA requests using any number of published "forced forgetting" methods.

Then, that raises a further question.

If I as an academic researcher wanted to make a model that writes code using GPL'd training data, would I be in compliance if I listed the training data and licensed my resulting model under the GPL?

I work for a university and hate big tech as much as anyone on Lemmy. I am just not entirely sure GPL makes sense here. GPL 3 was written because GPL 2 had loopholes that Microsoft exploited and I suspect their lawyers are pretty informed on the topic.

[–] tabular@lemmy.world 3 points 3 months ago* (last edited 3 months ago) (1 children)

The corresponding training data is the best place to look to see what code an output might be copied from. This applies to humans too. To avoid lawsuits, reverse-engineering projects use a clean-room strategy: requiring contributors to have never seen the original code. That way they can argue the contributors can't possibly be copying, even from memory (an imperfect compression too).

If the training data doesn't include GPL code, then the model can't violate the GPL. However, OpenAI has argued that it must use copyrighted works to make certain AIs (if I recall correctly). Even if that's legal, it's still a problem to me.

My understanding is that AI-generated media can't be copyrighted, because no person was being creative, much like the monkey selfie copyright dispute.

[–] simplymath@lemmy.world 1 points 3 months ago (1 children)

Yeah. I'm thinking more along the lines of research and open models than anything to do with OpenAI. Fair use, above all else, generally requires that the derivative work not threaten the economic viability of the original and that's categorically untrue of ChatGPT/Copilot which are marketed and sold as products meant to replace human workers.

The clean-room development analogy is definitely one I can get behind, but it raises further questions since LLMs are multi-stage. Technically, only the tokenization stage ever "sees" the source code, which makes the subsequent stages a bit like a "clean room." When does something stop being just a list of technical requirements and veer into infringement? I'm not sure that line is so clear.

I don't think the generative-copyright question is so straightforward, since the model requires a human agent to generate the input even if the output is deterministic. I know, for example, that Microsoft's Image Generator says its images fall under Creative Commons, which is distinct from the public domain given that some rights are withheld. Maybe that won't hold up in court forever, but Microsoft's lawyers seem to think it's more nuanced than "this output can't be copyrighted." And if the output isn't subject to copyright, then what product are they selling? Maybe the courts will agree that LLMs and monkeys are the same, but I'm skeptical, considering how much money these tech companies have poured into this and how much the United States seems to bend over backwards to accommodate tech monopolies and their human rights violations.

Again, I think it's clear that commercial entities using their market position to eliminate the need for artists and writers runs against the spirit of copyright and intellectual property, but I also think there are genuinely interesting questions when it comes to models that are themselves open source or non-commercial.

[–] tabular@lemmy.world 1 points 3 months ago

The human brain is compartmentalised: you can damage one part and lose the ability to recognize faces, or to name tools. Presumably it can be seen as multi-stage too, but would that be a defense? All we can do is look for evidence of copyright infringement in the output, or circumstantial evidence in the input.

I'm not sure the creativity of writing a prompt means you were creative in creating the output. Even if your position appears legal, you can still lose in court. I think Microsoft is hedging its bets that there will be precedent to validate its claim of copyright.

There are a few Creative Commons licenses, but most don't actually prevent commercial use (ShareAlike is like the copyleft in the GPL for code). Even if the media output were public domain and others were free to copy and redistribute it, that wouldn't prevent the author from selling public domain works (just make it harder). Public domain code isn't easily copied anyway when the software is shared without the source, as a binary file.