Posted on 06/22/2023 5:09:25 PM PDT by Twotone
Developers and business people need to better understand the legal implications of generative AI, a new study from Stanford University urges. That's because a wave of court cases may be coming, and the code, language, and images generated through AI may be based on copyrighted material.
The question is, who gets the credit for generative AI output? The study suggests this is still a hazy area.
Also: Who owns the code? If ChatGPT's AI helps write your app, does it still belong to you?
Generative AI enables developers and business users to generate code and narratives at the push of a button. The bad news is that "most of the words and images in datasets behind artificial intelligent agents like ChatGPT and DALL-E are copyrighted," the paper's authors point out. "Existing foundation models are trained on copyrighted material. Deploying these models can pose both legal and ethical risks when data creators fail to receive appropriate attribution or compensation."
The legal doctrine that provides cover for the use of limited segments of the code, narratives, and images is "fair use." However, the study's authors assert, "If the model produces output that is similar to copyrighted data, particularly in scenarios that affect the market of that data, fair use may no longer apply to the output of the model. Fair use is not guaranteed, and additional work may be necessary to keep model development and deployment squarely in the realm of fair use."
AI and machine learning practitioners "aren't necessarily aware of the nuances of fair use," Peter Henderson, a co-author of the paper, pointed out in a related interview. "At the same time, the courts have ruled that certain high-profile real-world examples are not protected fair use, yet those very same examples look like things AI is putting out. There's uncertainty about how lawsuits will come out in this area."
The co-authors predict an upcoming wave of lawsuits stemming from uncompensated use of code and content with generative AI. If courts eventually rule that AI does not meet the criteria of fair use, it "could dramatically curtail how generative AI is trained and used. As AI tools continue to advance in capabilities and scale, they challenge the traditional understanding of fair use, which has been well-defined for news reporting, art, teaching, and more. New AI tools -- both their capability and scale -- complicate this definition."
The IP and copyright implications for code also need to be weighed. "Like in natural language text cases, in software cases, literal infringement -- verbatim copying -- is unlikely to be fair use when it comprises a large portion of the code base," the co-authors explain. "And when the amount copied is small, the overall product is sufficiently different from the original one, or the code is sufficiently transformative, then fair use may be indicated under current standards."
Such standards may be more permissive than those for text or music. "Functional aspects of code are not protected by copyright, meaning that copying larger segments of code verbatim might be allowed in cases where the same level of similarity would not be permissible for text or music. Nonetheless, for software generated by foundation models, the more the generated content can be transformed from the original structure, sequence, and organization, the better."
The report's co-authors recommend establishing technical guardrails. "Install fair use filters that try to determine when the generated work -- a chapter in the style of J.K. Rowling, for instance, or a song reminiscent of Taylor Swift -- is a little too much like the original and begins to infringe on fair use."
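The study does not prescribe an implementation for such a filter, but a minimal sketch of the idea might look like the following, assuming a simple n-gram overlap test against a corpus of protected reference texts. The 8-word window and 0.15 threshold are illustrative placeholders; a production filter would use more robust similarity measures (embeddings, fuzzy hashing, and so on).

```python
# Minimal sketch of a "fair use filter": flag generated text whose n-gram
# overlap with a protected reference work exceeds a threshold.
# The n=8 window and 0.15 threshold are illustrative assumptions only.

def ngrams(text: str, n: int = 8) -> set:
    """Return the set of n-word shingles in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, reference: str, n: int = 8) -> float:
    """Fraction of the generated text's n-grams that also appear in the reference."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(reference, n)) / len(gen)

def fair_use_filter(generated: str, reference_texts: list, threshold: float = 0.15) -> bool:
    """Return True if the output looks too close to any protected work."""
    return any(overlap_ratio(generated, ref) > threshold for ref in reference_texts)
```

Used as a guardrail, a model's output would be checked against a reference corpus before being shown to the user, and regenerated or blocked when the overlap crosses the threshold.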
How could that be a problem? The owner of the AI owns its product, unless another agreement supersedes that.
Sounds like some sort of Full Employment Act for the lieyahs.
I believe it will depend on the AI application EULA.
Recognition and compensation for any licensed input data used for AI training should be paid based on output generated by the AI.
My company is struggling with this now. We’re told not to use any code it generates.
This feels like Napster all over again. You can say what you want but people are going to use it, even quietly.
Interesting question! If two people ask AI the same question, will they get the same answer? If not, who owns it?
China?
Libtards will put a leftist-written AI in charge of everything and everyone.
Watch and see; it's already happening.
Don’t go down the Black Rock / Aladdin rabbit hole . . .
I think that they want every single person to try all the types of AI (photos, fonts, writing, etc.), and then down the line "They" will seek forced compensation for usage or shut down any and all devices using anything remotely connected to their AI.
That makes a frightening amount of sense to me.
Could liken it to Monsanto:
If any of their GMO pollen lands on your heritage strain crops, then by golly you owe Monsanto big bucks.
It is so evil and devious that the masses will not even blink an eye while others, like us who are awake and see the light, know the death and enslavement that lies ahead.
The tooth fairy?
Indeed.
About every day, I give thanks that at least I am old and sans progeny.
If one uses a grammar-checker on a biographical article about, say, Isaac Newton, does the owner of the grammar-checker have a claim to the article?
Each AI system will eventually go to war with other AI systems.
The world will quickly realize that nothing that is viewed remotely can be trusted; that nothing written can be trusted.
Civilization will devolve to line-of-sight, room-based communication. Even phones won’t be trusted.
I do have some years left in me, and I am in no hurry to step into the next dimension, but I also do not want to live in an electronic bug-eating slave world and will die fighting for my country.
There’s some data that most of these stories leave out. I’ve tinkered with an AI program that generates pictures. You have to describe what you want it to draw, then you have to refine it, tell it things that you want, things that you DON’T want, etc. It can get VERY technical!
So, considering THAT, I think the person who describes the picture would be the “author”, and should get rights.
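For readers who haven't tried these tools, here is a minimal sketch of that describe-and-refine workflow, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint; the model name, prompt, and negative prompt are placeholders, not the commenter's actual settings.

```python
# Illustrative example of prompting an image model with both a description
# and a "negative prompt" listing things you do NOT want in the output.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a watercolor lighthouse at dusk, soft light, detailed brushwork",
    negative_prompt="blurry, extra limbs, text, watermark",  # what to avoid
    num_inference_steps=30,
).images[0]
image.save("lighthouse.png")
```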
My company is struggling with this now. We’re told not to use any code it generates.
= = =
Can you change a couple of lines that have little or no effect on the performance?
So it is now YOUR code?