Thousands of authors demand payment from AI companies for use of copyrighted works
Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual-property dispute to target AI development.
How can they prove that it wasn't just some abstract public data that was used to train the algorithms, but their particular intellectual property?
Well, if you ask e.g. ChatGPT for the lyrics to a song or page after page of a book, and it spits them out 1:1 correct, you could assume that it must have had access to the original.
Or at least excerpts from it. But even then, it’s one thing for a person to put up a quote from their favourite book on their blog, and a completely different thing for a private company to use that data to train a model, and then sell it.
Even more so, if you consider that the LLMs are marketed to replace the authors.
Yeah, which I still feel is utterly ridiculous. I love the idea of AI tools to assist with things, but as a complete replacement? No thank you.
I enjoy using things like SynthesizerV and VOCALOID because my own voice is pretty meh and my singing skills aren’t there. It’s fun to explore the voices, and learn how to use the tools. That doesn’t mean I’d like to see all singers replaced with synthesized versions. I view SynthV and the like as instruments, not much more.
I’ve used LLMs to proofread stuff and help me rephrase letters and such, but I’d never hire an editor for such small tasks anyway. The result has always required editing, because LLMs have a tendency to make stuff up.
Cases like that I don’t see a huge problem with. At my workplace, though, they’re talking about generating entire application layouts and codebases with AI, and, as the person in charge of the AI evaluation project, I can say the tech just isn’t there yet. You can in a sense use AI to make entire projects, but it’ll generate gnarly, unmaintainable rubbish. You need a human hand in there to guide it.
Otherwise you end up with garbage websites full of endlessly generated AI content that can easily be manipulated by third-party actors.
Can it recreate anything 1:1? When both my wife and I tried to get them to do that they would refuse, and if pushed they would fail horribly.
This is what I got. Looks pretty 1:1 for me.
Hilarious that it started with just “Buddy”, like you’d be happy with only the first word.
Yeah, for some reason it does that a lot when I ask it for copyrighted stuff.
As if it knew it wasn’t supposed to output that.
To be fair, you’d get the same result more easily by just googling “we will rock you lyrics”
How is ChatGPT knowing the lyrics to that song different from a website that just tells you the lyrics of the song?
Two points:
-
Google spitting out the lyrics isn’t ok from a copyright standpoint either. The reason why songwriters/singers/music companies don’t sue people who publish lyrics (even though they totally could) is because there are no damages. They sell music, so the lyrics being published for free doesn’t hurt their music business, and it also doesn’t hurt their songwriting business. Other types of copyright infringement that musicians/music companies care about are heavily policed, also on Google.
-
Content generation AI has a different use case, and it could totally hurt both of these businesses. My test from above that got it to spit out the lyrics verbatim shows that the AI did indeed use copyrighted works for its training. Now I can ask GPT to generate lyrics in the style of Queen, and it will basically perform the songwriter’s job. This can easily be done on a commercial scale, replacing the very human who wrote these lyrics. Now take this a step further and take a voice-generating AI (of which there are many), which was similarly trained on copyrighted audio samples of Freddie Mercury. Then add to the mix a music-generating AI, also fed with works of Queen, and now you have a machine capable of generating fake Queen songs based directly on Queen’s works. You can do the very same with other types of media as well.
And this is where the real conflict comes from.
-
you could assume that it must have had access to the original.
I don’t know if that’s true. Say Google grabs that book from a pirate site and then publishes the work in search results, and ChatGPT grabs the work from Google results and cobbles it back together as the original.
Who’s at fault?
I don’t think it’s a straightforward “ChatGPT can reproduce the work, therefore it stole it.”
Both are at fault: Google for distributing pirated material and OpenAI for using said material for financial gain.
Copyright doesn’t work like that. Say I sell you the rights to Thriller by Michael Jackson. You might not know that I don’t have the rights. But even if you bought the rights from me, whoever actually has the rights is totally in their legal right to sue you, because you never actually purchased any rights.
So if ChatGPT rips it off Google, who ripped it off a pirate site, then everyone in that chain who reproduced copyrighted works without permission from the copyright owners is liable for the damages caused by their unpermitted reproduction.
It’s literally the same as downloading something from a pirate site: it doesn’t become legal just because someone else ripped it before you.
That’s a terrible example because under copyright law downloading a pirated thing isn’t actually illegal. It’s the distribution that is illegal (uploading).
Yes, downloading is illegal, and the media is still an illegally obtained copy. It’s just never prosecuted, because the damages are minuscule if you just download. They can only fine you for the amount of damages you caused by violating the copyright.
If you upload to 10k people, they can claim that every one of them would have paid for it, so the damages are (if one copy is worth €30) ~€300k. That’s a lot of money and totally worth the lawsuit.
On the other hand, if you just download, the damages are just the value of one copy (in this case €30). That’s so minuscule that even having a lawyer write a letter is more expensive.
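The back-of-envelope math above can be sketched in a few lines (the €30 per-copy value is just the illustrative figure from this comment, not a real price):

```python
# Back-of-envelope copyright damages, per the illustrative €30/copy figure.
copy_price_eur = 30

# Downloading: you obtain one unlicensed copy.
download_damages = 1 * copy_price_eur

# Uploading to 10,000 people: rights holders can argue each of them
# would otherwise have paid for a copy.
upload_damages = 10_000 * copy_price_eur

print(download_damages)  # → 30
print(upload_damages)    # → 300000
```

Which is why the asymmetry exists: €30 isn’t worth a lawyer’s letter, €300k very much is.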
But that’s totally beside the point. OpenAI didn’t just download, they replicate. Which is causing massive damages, especially to the original artists, who in many cases are now not hired any more, since ChatGPT replaces them.
There are a lot of possible ways to audit an AI for copyrighted works, several of which have been proposed in the comments here. What this could lead to is laws requiring an accounting log of all material that has been used to train an AI, as well as all copyrights and compensation, etc.
Not without some seriously invasive warrants! Ones that will never be granted for an intellectual property case.
Intellectual property is an outdated concept. It used to exist so wealthier outfits couldn’t copy your work at scale and muscle you out of an industry you were championing.
It simply does not work the way it was intended. As technology spreads, the barrier for entry into most industries wherein intellectual property is important has been all but demolished.
i.e. 50 years ago: your song that your band performed is great. I have a recording studio and am gonna steal it muahahaha.
Today: “anyone have an audio interface I can borrow so my band can record, mix, master, and release this track?”
Intellectual property ignores the fact that, idk, Isaac Newton and Gottfried Wilhelm Leibniz both independently invented calculus at roughly the same time on opposite ends of a disconnected globe. That is to say, intellectual property doesn’t exist.
Ever opened a post to make a witty comment to find someone else already made the same witty comment? Yeah. It’s like that.
Spoken like someone who has never had something they worked on for years stolen.
What was “stolen” from you and how?
Spoken like someone who is having trouble admitting they’re standing on the shoulders of Giants.
I don’t expect a nuanced response from you, nor will I waste time with folks who can’t be bothered to respond in any form beyond attack, nor do I expect you to watch this.
Intellectual property died with the advent of the internet. It’s now just a way for the wealthy to remain wealthy.
Here is an alternative Piped link(s): https://piped.video/PJSTFzhs1O4
deleted by creator
I think you said this facetiously… but it literally is.
https://www.howtogeek.com/310158/are-other-people-allowed-to-use-my-tweets/
deleted by creator
Copyright isn’t Twitter rules…
deleted by creator
Personally speaking, I’ve generated some stupid images like different cities covered in baked beans, and have had crude watermarks generated with them that were decipherable enough that I could find some of the source images used to train the AI. When it comes to photorealistic image generation, if all the AI does is mildly tweak the watermark, then it’s not too hard to trace back.
All but a very small few generative AI programs use completely destructive methods to create their models. There is no way to recover the training images outside of infinitesimally small random chance.
What you are seeing is the AI recognising that images of the sort you are asking for generally include watermarks, and creating one of its own.
Do you have examples? It should only happen in case of overfitting, i.e. too many identical image for the same subject
Here’s one I generated and an image from the photographer. Prompt was Charleston SC covered in baked beans lol
Out of curiosity what model did you use?
I’d think that given the nature of the language models and how the whole AI thing tends to work, an author can pluck a unique sentence from one of their works, ask AI to write something about that, and if AI somehow ‘magically’ writes out an entire paragraph or even chapter of the author’s original work, well tada, AI ripped them off.
I think that to protect creators, they either need to be transparent about all content used to train the AI (highly unlikely) or have a disclaimer of liability, wherein if original content has been used in training the AI, then the original content creator has standing for legal action.
The only other alternative would be to ensure that the AI specifically avoids copyrighted or trademarked content, going back to a certain date.
Why a certain date? That feels arbitrary
At a certain age some media becomes public domain
Then it is no longer copyrighted
They can’t. All they could prove is that their work is part of a dataset that still exists.
There is already a business model for compensating authors: it is called buying the book. If the AI trainers are pirating books, then yeah - sue them.
There are plagiarism and copyright laws to protect the output of these tools: if the output is infringing, then sue them. However, if the output of an AI would not be considered infringing for a human, then it isn’t infringement.
When you sell a book, you don’t get to control how that book is used. You can’t tell me that I can’t quote your book (within fair use restrictions). You can’t tell me that I can’t refer to your book in a blog post. You can’t dictate who may and may not read a book. You can’t tell me that I can’t give a book to a friend. Or an enemy. Or an anarchist.
Folks, this isn’t a new problem, and it doesn’t need new laws.
It’s 100% a new problem. There’s established precedent for things costing different amounts depending on their intended use.
For example, buying a consumer copy of song doesn’t give you the right to play that song in a stadium or a restaurant.
Training an entire AI to make potentially an infinite number of derived works from your work is 100% worthy of requiring a special agreement. This goes beyond simple payment, to consent: a climate expert might not want their work in an AI which might severely mischaracterize the conclusions, or might want to require that certain queries are regularly checked by a human, etc.
Well, fine, and I can’t fault new published material having a “no AI” clause in its term of service. But that doesn’t mean we get to dream this clause into being retroactively for all the works ChatGPT was trained on. Even the most reasonable law in the world can’t be enforced on someone who broke it 6 months before it was legislated.
Fortunately the “horses out the barn” effect here is maybe not so bad. Imagine the FOMO and user frustration when ToS & legislation catch up and now ChatGPT has no access to the latest books, music, news, research, everything. Just stuff from before authors knew to include the “hands off” clause - basically like the knowledge cutoff, but forever. It’s untenable, OpenAI will be forced to cave and pay up.
OpenAI and such being forced to pay a share seems far from the worst scenario I can imagine. I think it would be much worse if artists, writers, scientists, open source developers and so on were forced to stop making their works freely available because they don’t want their creations to be used by others for commercial purposes. That could really mean that large parts of humanity would be cut off from knowledge.
I can well imagine copyleft gaining importance in this context. But this form of licensing seems pretty worthless to me if you don’t have the time or resources to sue for your rights - or even to deal with the various forms of licensing you need to know about to do so.
I think it would be much worse if artists, writers, scientists, open source developers and so on were forced to stop making their works freely available because they don’t want their creations to be used by others for commercial purposes.
None of them are forced to stop making their works freely available. If they want to voluntarily stop making their works freely available to prevent commercial interests from using them, that’s on them.
Besides, that’s not so bad to me. The rest of us who want to share with humanity will keep sharing with humanity. The worst case imo is that artists, writers, scientists, and open source developers cannot take full advantage of the latest advancements in tech to make more and better art, writing, science, and software. We cannot let humanity’s creative potential be held hostage by anyone.
That could really mean that large parts of humanity would be cut off from knowledge.
On the contrary, AI is making knowledge more accessible than ever before to large parts of humanity. The only comparable other technologies that have done this in recent times are the internet and search engines. Thank goodness the internet enables piracy that allows anyone to download troves of ebooks for free. I look forward to AI doing the same on an even greater scale.
Shouldn’t there be a way to freely share your works without having to expect an AI to train on them and then be able to spit them back out elsewhere without attribution?
No, there shouldn’t because that would imply restricting what I can do with the information I have access to. I am in favor of maintaining the sort of unrestricted general computing that we already have access to.
The rest of us who want to share with humanity will keep sharing with humanity. The worst case imo is that artists, writers, scientists, and open source developers cannot take full advantage of the latest advancements in tech to make more and better art, writing, science, and software. We cannot let humanity’s creative potential be held hostage by anyone.
You’re not talking about sharing it with humanity, you’re talking about feeding it into an AI. How is this holding back the creative potential of humanity? Again, you’re talking about feeding and training a computer with this material.
Even the most reasonable law in the world can’t be enforced on someone who broke it 6 months before it was legislated.
Sure it can. Just because it is a new law doesn’t mean they get to continue benefiting from IP ‘theft’ forever into the future.
Imagine the FOMO and user frustration when ToS & legislation catch up and now ChatGPT has no access to the latest books, music, news, research, everything. Just stuff from before authors knew to include the “hands off” clause
How is this an issue for the IP holders? Just because you build something cool or useful doesn’t mean you get a pass to do what you want.
basically like the knowledge cutoff, but forever. It’s untenable,
Untenable for ChatGPT maybe, but it’s not as if it’s the end of ‘knowledge’ or the end of AI. It’s just a single company product.
My point is that the restrictions can’t go on the input; they have to go on the output - and we already have laws that govern such derivative works (or reuse / rebroadcast).
The thing is, copyright isn’t really well-suited to the task, because copyright concerns itself with who gets to, well, make copies. Training an AI model isn’t really making a copy of that work. It’s transformative.
Should there be some kind of new model of remuneration for creators? Probably. But it should be a compulsory licensing model.
The slippery slope here is that we are currently considering humans and computers to be different because (something someone needs to actually define). If you say “AI read my book and output a similar story, you owe me money” then how is that different from “Joe read my book and wrote a similar story, you owe me money.” We have laws already that deal with this but honestly how many books and movies aren’t just remakes of Romeo and Juliet or Taming of the Shrew?!?
If you say “AI read my book and output a similar story, you owe me money” then how is that different from “Joe read my book and wrote a similar story, you owe me money.”
You’re bounded by the limits of your flesh. AI is not. The $12 you spent buying a book at Barnes & Noble was based on the economy of scarcity that your human abilities constrain you to.
It’s hard to say that the value proposition is the same for human vs AI.
We are making an assumption that humans do “human things”. If I wrote a derivative work of your $12 book, does it matter that the way I wrote it was to use a pen and paper to create a statistical analysis of your work and find the “next best word” until I had a story? Sure, my book took 30 years to write, but if I followed the same math as an AI, would that matter?
It’s not even looking for the next best word. It’s looking for the next best token. It doesn’t know what words are. It reads tokens.
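As a toy illustration of tokens vs. words (the vocabulary below is invented for illustration; real LLM tokenizers learn theirs from data, e.g. via byte-pair encoding), a greedy longest-match tokenizer might split text like this:

```python
# Toy greedy tokenizer. The vocabulary is made up; real models learn
# theirs from a corpus, and a token need not align with a whole word.
VOCAB = sorted(["we ", "will ", "rock ", "you", "wi", "ll", "w", "e", " "],
               key=len, reverse=True)  # try longest pieces first

def tokenize(text):
    tokens = []
    while text:
        for piece in VOCAB:
            if text.startswith(piece):
                tokens.append(piece)
                text = text[len(piece):]
                break
        else:
            # Character not covered by the toy vocabulary: emit it as-is.
            tokens.append(text[0])
            text = text[1:]
    return tokens

print(tokenize("we will rock you"))  # → ['we ', 'will ', 'rock ', 'you']
```

Here the tokens happen to line up with words, but nothing forces that: a rare word can end up split across several tokens, which is the sense in which the model “doesn’t know what words are.”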
Good point.
I could easily see laws created that blanket-outlaw computer-generated output derived from human-created data sets, and suddenly medical and technical advancements stop because the laws were written by people who don’t understand what is going on.
It wouldn’t matter, because derivative works require permission. But I don’t think anyone’s really made a compelling case that OpenAI is actually making directly derivative work.
The stronger argument is that LLM’s are making transformational work, which is normally fair use, but should still require some form of compensation given the scale of it.
But no one is complaining about publishing derived work. The issue is that “the robot brain has full copies of my text and anything it creates ‘cannot be transformative’”. This doesn’t make sense to me, because my brain made a copy of your book too, it’s just really lossy.
I think right now we have definitions for the types of works that only loosely fit human actions mostly because we make poor assumptions of how the human brain works. We often look at intent as a guide which doesn’t always work in an AI scenario.
Well, Shakespeare has been dead for a few years now, there’s no copyright to speak of.
And if you make a book based on an existing one, then you totally need permission from the author. You can’t just e.g. make a Harry Potter 8.
But AIs are more than happy to do exactly that. Or to even reproduce copyrighted works 1:1, or with only a few mistakes.
If a person writes a fanfic Harry Potter 8, it isn’t a problem until they try to sell it or distribute it widely. I think where the legal issues get sticky here is who caused a particular AI-generated Harry Potter 8 to be written.
If the AI model attempts to block this behavior with contract stipulations and guardrails, and if it isn’t advertised as “a Harry Potter generator” but instead as a general-purpose tool, then reasonably the legal liability might be on the user that decides to do this or not, vs. the tool that makes such behavior possible.
Hypothetically, what if an AI was trained up that never read Harry Potter, but it’s pretty darn capable, and I feed into it the entire Harry Potter novel(s) as context in my prompt and then ask it to generate an eighth story — is the tool at fault or am I?
Fanfic can actually be a legal problem. It’s usually not prosecuted, because it harms the brand to do so, but if a company was doing that professionally, they’d get into serious hot water.
Regarding your hypothetical scenario: If you train the AI with copyrighted works, so that you can make it reproduce HP8, then you are at fault.
If the tool was trained with HP books and you just ask really nicely to circumvent the protections, I would guess the tool (=> its creators) would certainly be at fault (since it did train on copyrighted material and the protections were obviously not good enough), and at the latest when you reproduce the output, you are too.
It seems like people are afraid that AI can do it when I can do it too. But their reason for freaking out is…??? It’s not like AI is calling up publishers trying to get Harry Potter 8 published. If I ask it to create Harry Potter 1 but change his name to Gary Trotter, it’s not the AI that is doing something bad, it’s me.
That was my point. I can memorize text, and it’s only when I play it off as my own that it’s wrong. No one cares that I memorized the first chapter and can recite it if I’m not trying to steal it.
That’s not correct. The issue is not whether you play it off as your own, but how much the damages are that you can be sued for. If you recite something that you memorized in front of a handful of friends, the damages are non-existent and hence there is no point in suing you.
But if you give a large commercial concert and perform a cover song without permission, you will get sued, no matter if you say “This song is from <insert original artist> and not from me”, because it’s not about giving credit, it’s about money.
And regarding getting something published: This is not so much about big name art like Harry Potter, but more about people doing smaller work. For example, voice actors (both for movie translations and smaller things like announcements in public transport) are now routinely replaced by AI that was trained on their own voices without their permission.
Similar story with e.g. people who write texts for homepages and ad material. Stuff like that. And that has real-world consequences already now.
The issue is not whether you play it off as your own, but how much the damages are that you can be sued for.
I think that’s one and the same. I’m just not seeing the damages here, because the output of the AI doesn’t go any further than being AI output without a further human act. Authors are idiots if they claim “well, someone could ask ChatGPT to output my entire book and you could read it for free.” If you want to go after that type of crime, then have ChatGPT report the users asking for it. If your book is accessible via a library, I’m not seeing any difference between you asking ChatGPT to write in someone’s style and asking me to write in their style. If you ask ChatGPT for lines verbatim, I can recite them too. I don’t know what legitimate damages they are claiming.
For example, voice actors
I think this is a great example, but again I feel like the law is not only lacking but would need to outlaw other human acts not currently considered illegal.
If you do impressions, you’re mimicking the tone, cadence and selection of language someone else uses. You aren’t recording them and playing back the recording, you are using your own voice box to create a sound similar to the celebrity. An AI sound generator isn’t playing back a recording either. It’s measuring tone, cadence, and language used and creates a new sound similar to the celebrity. The only difference here is that the AI would be more precise than a human’s ability to use their voice.
Copyright also deals with derivative works.
Derivative and transformative are quite different though.
Challenge level impossible: try uploading something long to amazon written by chatgpt without triggering the plagiarism detector.
When you sell a book, you don’t get to control how that book is used.
This is demonstrably wrong. You cannot buy a book, and then go use it to print your own copies for sale. You cannot use it as a script for a commercial movie. You cannot go publish a sequel to it.
Now please just try to tell me that AI training is specifically covered by fair use and satire case law. Spoiler: you can’t.
This is a novel (pun intended) problem space and deserves to be discussed and decided, like everything else. So yeah, your cavalier dismissal is cavalierly dismissed.
I completely fail to see how it wouldn’t be considered transformative work
It fails the transcendence criterion. Transformative works go beyond the original purpose of their source material to produce a whole new category of thing or benefit that would otherwise not be available.
Taking 1000 fan paintings of Sauron and using them in combination to create 1 new painting of Sauron in no way transcends the original purpose of the source material. The AI painting of Sauron isn’t some new and different thing. It’s an entirely mechanical iteration on its input material. In fact the derived work competes directly with the source material which should show that it’s not transcendent.
We can disagree on this and still agree that it’s debatable and should be decided in court. The person above that I’m responding to just wants to say “bah!” and dismiss the whole thing. If we can litigate the issue right here, a bar I believe this thread has already met, then judges and lawmakers should litigate it in our institutions. After all the potential scale of this far reaching issue is enormous. I think it’s incredibly irresponsible to say feh nothing new here move on.
Being able to dialog with a book, even to the point of asking the AI to “take on the persona of a character in the book” and support an ongoing conversation, is substantively a transcendent version of the original. That one can, as a small subset of that transformed version, get quotes from the original work feels like a small part of this new work.
If this had been released for a single work, like “here is a Star Wars AI that can take on the persona of Star Wars characters” and answer questions about the Star Wars universe etc., I think it’s more likely that the position I’m taking here would lose the debate. But this is transformative against the entire set of prior material from books, movies, film, debate, art, science, philosophy, etc. It merges and combines all of that. I think the sheer scope of this new thing supports the idea that it’s truly transformative.
A possible compromise would be to tax AI and use the proceeds to fund a UBI initiative. True, we’d get to argue whether high-profile authors with IP that catches the public’s attention should get more than just a blogger or a random online contributor – but the basic path is that AI is trained on and succeeds by standing on the shoulders of all people. So all people should get some benefits.
I do think you have a point here, but I don’t agree with the example. If a fan creates the 1001st fan painting after looking at the others, it might be quite similar if they lack the artistic quality to express their unique views. And it also competes with its sources, yet it’s generally accepted.
Typically the argument has been “a robot can’t make transformative works because it’s a robot.” People think our brains are special when in reality they are just really lossy.
Even if you buy that premise, the output of the robot is only superficially similar to the work it was trained on, so there’s no copyright infringement there. And the training process itself is done by humans; it takes some tortured logic to deny the technology’s transformative nature.
Oh i think those people are wrong, but we tend to get laws based on people who don’t understand a topic deciding how it should work.
Go ask ChatGPT for the lyrics of a song and then tell me, that’s transformative work when it outputs the exact lyrics.
Go ask a human for the lyrics of a song and then tell me that’s transformative work.
Oh wait, no one would say that. This is why the discussion with non-technical people goes into the weeds.
Because it would be totally clear to anyone that reciting the lyrics of a song is not a transformative work, but instead covered by copyright.
The only reason why you can legally do it, is because you are not big enough to be worth suing.
Try singing a copyrighted song on TV.
For example, until it became clear that Warner/Chappell didn’t actually own the rights to “Happy Birthday To You”, they’d sue anyone who sang that song in any kind of broadcast or other big public setting.
Quote from Wikipedia:
The company continued to insist that one cannot sing the “Happy Birthday to You” lyrics for profit without paying royalties; in 2008, Warner collected about US$5,000 per day (US$2 million per year) in royalties for the song. Warner/Chappell claimed copyright for every use in film, television, radio, and anywhere open to the public, and for any group where a substantial number of those in attendance were not family or friends of the performer.
So if a human isn’t allowed to reproduce copyrighted works in a commercial fashion, what would make you think that a computer reproducing copyrighted works would be ok?
And regarding derivative works:
Check out Vanilla Ice vs Queen. Vanilla Ice just used 7 notes from the Queen song “Under Pressure” in his song “Ice Ice Baby”.
That was enough that he had to pay royalties for that.
So if a human has to pay for “borrowing” seven notes from a copyrighted work, why would a computer not have to?
Well, they’re fixing that now. I just asked ChatGPT to tell me the lyrics to Stairway to Heaven and it replied with a brief description of who wrote it and when, then said “here are the lyrics:” and stopped 3 words into the lyrics.
In theory as long as it isn’t outputting the exact copyrighted material, then all output should be fair use. The fact that it has knowledge of the entire copyrighted material isn’t that different from a human having read it, assuming it was read legally.
This feels like a solution to a non-problem. When a person asks the AI “give me X copyrighted text” no one should be expecting this to be new works. Why is asking ChatGPT for lyrics bad while asking a human ok?
Try it again and when it stops after a few words, just say “continue”. Do that a few times and it will spit out the whole lyrics.
It’s also a copyright violation if a human reproduces memorized copyrighted material in a commercial setting.
If, for example, I give a concert and play all of Nirvana’s songs without a license to do so, I am still violating the copyright even if I totally memorized all the lyrics and the sheet music.
Transformativeness is only one of the four fair use factors. Just because something is transformative doesn’t alone make it fair use.
Even if AI is transformative, it would likely fail on the third factor. Fair use requires you to take the minimum amount of the copyrighted work, and AI companies scrape as much data as possible to train their models. Very unlikely to support a finding of fair use.
The final factor is market impact. Generative AIs are built to mimic the creative outputs of human authorship; by design, AI acts as a market replacement for human authorship, so it would likely fail on this factor as well.
Regardless, trained AI models are unlikely to be copyrightable. Copyrights require human authorship which is why AI and animal generated art are not copyrightable.
A trained AI model is a piece of software so it should be protectable by patents because it is functional rather than expressive. But a patent requires you to describe how it works, so you can’t do that with AI. And a trained AI model is self-generated from training data, so there’s no human authorship even if trained AI models were copyrightable.
Exactly which laws apply to AI models is unclear, and it will likely be determined by court cases.
No, you misunderstand. Yes, they can control how the content in the book is used - that’s what copyright is. But they can’t control what I do with the book - I can read it, I can burn it, I can memorize it, I can throw it up on my roof.
My argument is that there is nothing wrong with training an AI with a book - that’s input for the AI, and that is indistinguishable from a human reading it.
Now what the AI does with the content - if it plagiarizes or exceeds fair use - that’s a problem, but those problems are already covered by copyright laws. They have no more business saying what can or cannot be input into an AI than they can restrict what I can read (and learn from). They can absolutely enforce their copyright on the output of the AI just like they can if I print copies of their book.
My objection is strictly on the input side, and the output is already restricted.
Makes sense. I would love to hear how anyone can disagree with this. Just because an AI learned or trained from a book doesn’t automatically mean it violated any copyrights.
The base assumption of those with that argument is that an AI is incapable of being original, so it is “stealing” anything it is trained on. The problem with that logic is that’s exactly how humans work - everything they say or do is derivative from their experiences. We combine pieces of information from different sources, and connect them in a way that is original - at least from our perspective. And not surprisingly, that’s what we’ve programmed AI to do.
Yes, AI can produce copyright violations. They should be programmed not to. They should cite their sources when appropriate. AI needs to “learn” the same lessons we learned about not copy-pasting Wikipedia into a term paper.
It’s specifically distribution of the work or derivatives that copyright prevents.
So you could make an argument that an LLM that’s memorized the book and can reproduce (parts of) it upon request is infringing. But one that’s merely trained on the book, but hasn’t memorized it, should be fine.
But by their very nature the LLMs simply redistribute the material they’ve been trained on. They may disguise it assiduously, but there is no person at the center of the thing adding creative strokes. It’s copyrighted material in, copyrighted material out, so the plaintiffs allege.
They don’t redistribute. They learn information about the material they’ve been trained on - not the material itself* - and can use it to generate material they’ve never seen.
* Bigger models seem to memorize some of the material and can infringe, but that’s not really the goal.
This is a little off, when you quote a book you put the name of the book you’re quoting. When you refer to a book, you, um, refer to the book?
I think the gist of these authors complaints is that a sort of “technology laundered plagiarism” is occurring.
Copyright 100% applies to the output of an AI, and it is subject to all the rules of fair use and attribution that entails.
That is very different than saying that you can’t feed legally acquired content into an AI.
I asked Bing Chat for the 10th paragraph of the first Harry Potter book, and it gave me this:
“He couldn’t know that at this very moment, people meeting in secret all over the country were holding up their glasses and saying in hushed voices: ‘To Harry Potter – the boy who lived!’”
It looks like technically I might be able to obtain the entire book (eventually) by asking Bing the right questions?
Then this is a copyright violation - it violates any standard for such, and the AI should be altered to account for that.
What I’m seeing is people complaining about content being fed into AI, and I can’t see why that should be a problem (assuming it was legally acquired or publicly available). Only the output can be problematic.
No, the AI should be shut down and the owner should first be paying the statutory damages for each use of registered works of copyright (assuming all parties in the USA)
If they have a company left after that, then they can fix the AI.
Again, my point is that the output is what can violate the law, not the input. And we already have laws that govern fair use, rebroadcast, etc.
I think it’s not just the output. I can buy an image on any stock platform, print it on a T-shirt, wear it myself or gift it to somebody. But if I want to sell T-shirts using that image I need a commercial licence - even if I alter the original image extensively or combine it with other assets to create something new. It’s not exactly the same thing, but OpenAI and other companies certainly use copyrighted material to create and improve commercial products. So this doesn’t seem like the same kind of usage an average Joe buys a book for.
However, if the output of an AI would not be considered infringing for a human, then it isn’t infringement.
It’s an algorithm that’s been trained on numerous pieces of media by a company looking to make money off it. I see no reason to give them a pass on fairly paying for that media.
You can see this if you reverse the comparison, and consider what a human would do to accomplish the task in a professional setting. That’s all an algorithm is. An execution of programmed tasks.
If I gave a worker a pirated link to several books and scientific papers in the field, and asked them to synthesize an overview/summary of what they read and publish it, I’d get my ass sued. I have to buy the books and the scientific papers. STEM companies regularly pay for access to papers and codes and standards. Why shouldn’t an AI have to do the same?
If I gave a worker a pirated link to several books and scientific papers in the field, and asked them to synthesize an overview/summary of what they read and publish it, I’d get my ass sued. I have to buy the books and the scientific papers.
Well, if OpenAI knowingly used pirated work, that’s one thing. It seems pretty unlikely and certainly hasn’t been proven anywhere.
Of course, they could have done so unknowingly. For example, if John C Pirate published the transcripts of every movie since 1980 on his website, and OpenAI merely crawled his website (in the same way Google does), it’s hard to make the case that they’re really at fault any more than Google would be.
well no, because the summary is its own copyrighted work
The published summary is open to fair use by web crawlers. That was settled in Perfect 10 v Amazon.
Right, but not one the author of the book could go after. The article publisher would have the closest rights to a claim. But if I read the crib notes and a few reviews of a movie… Then go to summarize the movie myself… That’s derivative content and is protected under copyright.
Haven’t people asked it to reproduce specific chapters or pages of specific books and it’s gotten it right?
I haven’t been able to reproduce that, and at least so far, I haven’t seen any very compelling screenshots of it that actually match. Usually it just generates text, but that text doesn’t actually match.
Gotcha. This seems like a good way to test for it then, I think.
It’s an algorithm that’s been trained on numerous pieces of media by a company looking to make money off it.
If I read your book… and get an amazing idea… Turn it into a business and make billions off of it. You still have no right to anything. This is no different.
If I gave a worker a pirated link to several books and scientific papers in the field
There’s been no proof or evidence provided that ANY content was ever pirated. Have any of the companies even provided the dataset they’ve used yet?
Why is this the presumption that they did it the illegal way?
If I read your book… and get an amazing idea… Turn it into a business and make billions off of it. You still have no right to anything. This is no different
I don’t see how this is even remotely the same? These companies are using this material to create their commercial product. They’re not consuming it personally and developing a random idea later, far removed from the book itself.
I can’t just buy (or pirate) a stack of Blu-rays and then go start my own Netflix, which is akin to what is happening here.
They’re not consuming it personally and developing a random idea later, far removed from the book itself.
I never said that the idea would be removed from the book. You can literally take the idea from the book itself and make the money. There would be no issues. There is no dues owed to the book’s writer.
This is the whole premise for educational textbooks. You can explain to me how the whole world works in book form… I can go out and take those ideas wholesale from your book and apply them to my business and literally make money SOLELY from information from your book. There’s nothing due back to you as a writer from me nor my business.
You’ve failed to explain how that relates to your point. Sure, you can purchase an economics textbook and then go become a finance bro, but that’s not what they’re doing here. They’re taking that textbook (that wasn’t paid for) and feeding it into their commercial product. The end product is derived from the author’s work.
To put it a different way, would they still be able to produce ChatGPT if one of the developers simply read that same textbook and then inputted what they learned into the model? My guess is no.
It’d be the same if I went and bought CDs, ripped my favorite tracks, and then put them into a compilation album that I then sold for money. My product can’t exist without having copied the original artists work. ChatGPT just obfuscates that by copying a lot of songs.
A better comparison would probably be sampling. Sampling is fair use in most of the world, though there are mixed judgments. I think most reasonable people would consider the output of ChatGPT to be transformative use, which is considered fair use.
If I created a web app that took samples from songs created by Metallica, Britney Spears, Backstreet Boys, Snoop Dogg, Slayer, Eminem, Mozart, Beethoven, and hundreds of other different musicians, and allowed users to mix all these samples together into new songs, without getting a license to use these samples, the RIAA would sue the pants off of me faster than you could say “unlicensed reproduction.”
It doesn’t matter that the output of my creation is clear-cut fair use. The input of the app–the samples of copyrighted works–is infringing.
They’re taking that textbook (that wasn’t paid for) and feeding it into their commercial product.
Nobody has provided any evidence that this is the case. Until this is proven it should not be assumed. Bandwagoning (and repeating this over and over again without any evidence or proof) against the ML people without evidence is not fair. The whole point of the Justice system is innocent until proven guilty.
The end product is derived from the author’s work.
Derivative works are 100% protected under copyright law. https://www.legalzoom.com/articles/what-are-derivative-works-under-copyright-law
This is the same premise that allows the “fair use” we all got up in arms about on YouTube. Claiming that this doesn’t exist now in this case means that all that stuff we fought for on YouTube needs to be rolled back.
To put it a different way, would they still be able to produce ChatGPT if one of the developers simply read that same textbook and then inputted what they learned into the model? My guess is no.
Why not? Why can’t someone grab a book, scan it… chuck it into an OCR and get the same content? There are plenty of ways that snippets of raw content could make it into these repositories WITHOUT asserting legal problems.
It’d be the same if I went and bought CDs, ripped my favorite tracks, and then put them into a compilation album that I then sold for money.
No… You could, for all intents and purposes, have recorded all your songs from the radio onto a cassette… That would be 100% legal for personal consumption… which would be what the ML authors are doing. ChatGPT and others could have sourced information from published sources that are completely legit. No “author” has provided any evidence yet that ChatGPT and others have actually broken a law. For all we know the authors of these tools have library cards and fed in screenshots of the digital scans of the book, or hand-scanned the book. Or didn’t even use the book at all and contextually grabbed a bunch of content from the internet at large.
Since the ML bots are all making derivative works, rather than spitting out original content… they’d be covered by copyright as a derivative work.
This only becomes an actual problem if you can prove that these tools have done BOTH
- obtain content in an illegal fashion
- provide the copyrighted content freely without fair-use or other protections.
There is already a business model for compensating authors: it is called buying the book. If the AI trainers are pirating books, then yeah - sue them.
That’s part of the allegation, but it’s unsubstantiated. It isn’t entirely coherent.
It’s not entirely unsubstantiated. Sarah Silverman was able to get ChatGPT to regurgitate passages of her book back to her.
Her lawsuit doesn’t say that. It says,
when ChatGPT is prompted, ChatGPT generates summaries of Plaintiffs’ copyrighted works—something only possible if ChatGPT was trained on Plaintiffs’ copyrighted works
That’s an absurd claim. ChatGPT has surely read hundreds, perhaps thousands of reviews of her book. It can summarize it just like I can summarize Othello, even though I’ve never seen the play.
I don’t know if this holds water though. You don’t need to train the AI on the book itself to get that result. Just on discussions about the book, which for sure include passages from the book.
You know what would solve this? We all collectively agree this fucking tech is too important to be in the hands of a few billionaires, start an actual public free open source fully funded and supported version of it, and use it to fairly compensate every human being on Earth according to what they contribute, in general?
Why the fuck are we still allowing a handful of people to control things like this??
Because the tech behind it isn’t cheap and money does not fall from trees.
No entity on the planet has more money than our governments. It’d be more efficient for a government to fund this than any private company.
Many governments on the planet have less money than some big tech or oil companies. Obviously not those of large industrious nations, but most nations aren’t large and industrious.
The government and efficiency don’t go together
Plenty of research shows that each dollar put into government programs gets much better returns than private companies. This is literally a neolib propaganda talking point.
That’s a lazy generalization.
There is nothing objectively wrong with your statement. However, we somehow always default to solving that issue by having some dragon hoard enough gold, and there is something objectively wrong with that.
Removed by mod
Actually, many bills are more of a fabric material now than an actual paper product, and many bills in Europe are polymer-based. Both add to the difficulty of counterfeiting.
Actually, most of the money is just 1’s and 0’s in a computer, coming into existence from nothing and vanishing into nothing. Fiat money backed by “trust”. As Henry Ford once said:
It is well enough that people of the nation do not understand our banking and monetary system, for if they did, I believe there would be a revolution before tomorrow morning.
This comment is excellent. You now have ten trillion LemBux.
Setting aside the obvious answer of “because capitalism”, there are a lot of obstacles to democratizing this technology. Training of these models is done on clusters of A100 GPUs, which are priced at $10,000 USD each. Then there’s also the fact that a lot of the progress being made is by highly specialized academics, often with the resources of large corporations like Microsoft.
Additionally, the curation of datasets is another massive obstacle. We’ve mostly reached the point of diminishing returns from just throwing all the data at the training of models; it’s quickly becoming apparent that the quality of data is far more important than the quantity (see TinyStories as an example). This means a lot of work and research needs to go into qualitative analysis when preparing a dataset. You need a large corpus of input, each item of which is above a quality threshold, but as a whole the corpus also needs to represent a wide enough variety of circumstances for you to reach emergence in the domain(s) you’re trying to train for.
There is a large and growing body of open source model development, but even that only exists because of Meta “leaking” the original Llama models, and now more recently releasing Llama 2 with a commercial license. Practically overnight an entire ecosystem was born creating higher quality fine-tunes and specialized datasets, but all of that was only possible because Meta invested the resources and made it available to the public.
Actually in hindsight it looks like the answer is still “because capitalism” despite everything I’ve just said.
I know the answer to pretty much all of our “why the hell don’t we solve this already?” questions is: capitalism.
But I mean, as Lrrr would say, “why doesn’t the working class, as the biggest of the classes, simply eat the other one?”
The short answer is friction. The friction of overcoming the forces of violence the larger class has at its disposal and utilizes at the smallest hint of uprising is greater than the friction of accepting the status quo.
The friction of accepting the status quo only seems to grow stronger though.
One would hope
Most people don’t even think that’s an option though.
The end of history, with the fall of USSR and capitalism winning the propaganda wars, means most people don’t even see a different future.
Why would you fight a future that looks the same?
People need to wake up and have hope for a different, better future. That’s the only way they’ll move against this.
But for that 100+ years of propaganda have to be overcome…
Because we shy away from responsibility.
I think the longer response to this is more accurate. It’s more “because capitalism” than anything else.
And capitalism over the course of the 20th century made very successful attempts of alienating completely the working class and destroying all class consciousness or material awareness.
So people keep thinking that the problem is we as individuals doing capitalism wrong. Not capitalism itself.
Removed by mod
You think it is so simple you can just download it and run it on your laptop?
You kind of can though? The bigger models aren’t really more complicated, just bigger. If you can cram enough RAM or swap into a laptop, llama.cpp will get there eventually.
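For a sense of what “enough RAM” means, here’s a back-of-the-envelope sketch: the weights alone take roughly params × bits-per-weight ÷ 8 bytes. The numbers below (a 7B-parameter model, 4-bit quantization of the kind llama.cpp supports) are illustrative, not a spec.

```python
# Rough RAM needed just to hold a model's weights.
def weight_gib(params: int, bits: int) -> float:
    """Approximate size of the weights in GiB: params * bits / 8 bytes."""
    return params * bits / 8 / 1024**3

# A 7B-parameter model at 4-bit quantization fits in a few GiB...
print(round(weight_gib(7_000_000_000, 4), 2))   # → 3.26
# ...while the same model at 16-bit needs several times that.
print(round(weight_gib(7_000_000_000, 16), 2))  # → 13.04
```

Real memory use is higher (KV cache, activations, runtime overhead), but this is why quantized models can run on an ordinary laptop at all.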
Someone should AGPL their novel and force the AI company to open source their entire neural network.
This is a good debate about copyright/ownership. On one hand, yes, the authors’ works went into “training” the AI, but we would need a scale to grade how well a source piece is absorbed by the AI’s learning. For example, did the AI learn more from the MAD magazine I just fed it, or did it learn more from Moby Dick? Who gets to determine that grading system? Sadly, musicians know this struggle. There are only so many notes and so many words; eventually overlap and similarities occur. But did that musician steal a riff, or did both musicians come to a similar riff separately? Authors don’t own words or letters, so a computer that just copies those words and then uses an algorithm to write up something else is no different than you or I being influenced by our favourite heroes or the information we have been given. Do I pay the author for reading his book? Or do I just pay the store to buy it?
Copyright laws are really out of control at this point. Their periods are far too long and, like you said, how can anyone claim to truly be original at this point? A dedicated lawyer can find reasonable prior art for pretty much anything nowadays. The only reason old sources look original is because no records exist of the sources they used.
While I am rooting for authors to make sure they get what they deserve, I feel like there is a bit of a parallel to textbooks here. As an engineer, if I learn about statics from a textbook and then go use that knowledge to help design a bridge that I and my company profit from, the textbook company can’t sue. If my textbook has a detailed example of how to build a new bridge across the Tacoma Narrows, and I use all of the same design parameters for a real Tacoma Narrows bridge, that may have much more of a case.
I think that these are fiction writers. The maths you’d use to design that bridge is fact and the book company merely decided how to display facts. They do not own that information, whereas the Handmaid’s Tale was the creation of Margaret Atwood and was an original work.
It’s not really a parallel.
The text books don’t have copyrights on the concepts and formulae they teach. They only have copyrights for the actual text.
If you memorize the text book and write it down 1:1 (or close to it) and then sell that text you wrote down, then you are still in violation of the copyright.
And that’s what the likes of ChatGPT are doing here. For example, ask it to output the lyrics for a song and it will spit out the whole (copyrighted) lyrics 1:1 (or very close to it). Same with pages of books.
The memorization is closer to that of a fanatical fan of the author: it usually knows the beginning of the book and the more well-known passages, but not entire longer works.
By now, ChatGPT tries to refuse to output copyrighted material even where it could, and though it can be tricked, they appear to have implemented a hard filter for some of the more well-known passages, which stops generation a few words in.
Have you tried just telling it to “continue”?
Somewhere in the comments to this post I posted screenshots of me trying to get lyrics for “We will rock you” from ChatGPT. It first just spat out “Verse 1: Buddy,” and ended there. So I answered with “continue”, it spat out the next line and after the second “continue” it gave me the rest of the lyrics.
Similar story with e.g. the first chapter of Harry Potter 1 and other stuff I tried. The output is often not perfect, with a few words being wrong, but it’s very clearly a “derived work” of the original. In the view of copyright law, changing a few words here and there is not a valid way of getting around copyrights.
But you paid for the textbook
Libraries exist
You have a point, but there’s a pretty big difference between something like a statistics textbook and the novel “Dune”, for instance. One was specifically written to teach mostly pre-existing ideas, and the other was created as entertainment to sell to as wide an audience as possible.
An AI analyzes the words of a query and generates its response(s) based on word-use probabilities derived from a large corpus of copyrighted texts. This makes its output derivative of those texts in a way that someone applying knowledge learned from the texts is not.
Why, though?
Is it because we can’t explain the causal relationships between the words in the text and the human’s output or actions?
If a very good neuroscientist traced out the engineer’s brain and could prove that, actually, if it wasn’t for the comma on page 73 they wouldn’t have used exactly this kind of bolt in the bridge, now is the human’s output derivative of the text?
Any rule we make here should treat people who are animals and people who are computers the same.
And even regardless of that principle, surely a set of AI weights is either not copyrightable or else a sufficiently transformative use of almost anything that could go into it? If it decides to regurgitate what it read, that output could be infringing, same as for a human. But a mere but-for causal connection between one work and another can’t make text that would be non-infringing if written by a human suddenly infringing because it was generated automatically.
Because word-use probabilities in a text are not the same thing as the information expressed by the text.
Any rule we make here should treat people who are animals and people who are computers the same.
W-what?
I think what he meant was that we should treat an AI the same way we treat people - if a person making a derivative work can be copyright striked, then so should an AI making a derivative work. The same rule should apply to all creators*, regardless of whether they are an AI or not.
In the future, some people might not be human. Or some people might be mostly human, but use computers to do things like fill in for pieces of their brain that got damaged.
Some people can’t recognize faces, for example, but computers are great at that now, and Apple has that thing that is like Google Glass but better. But a law against doing facial recognition with a computer, allowing it to only be done with a brain, would prevent that solution from working.
And currently there are a lot of people running around trying to legislate exactly how people’s human bodies are allowed to work inside, over those people’s objections.
I think we should write laws on the principle that anybody could be a human, or a robot, or a river, or a sentient collection of bees in a trench coat, that is 100% their own business.
But the subject under discussion is large language models that exist today.
I think we should write laws on the principle that anybody could be a human, or a robot, or a river, or a sentient collection of bees in a trench coat, that is 100% their own business.
I’m sorry, but that’s ridiculous.
I have indeed made a list of ridiculous and heretofore unobserved things somebody could be. I’m trying to gesture at a principle here.
If you can’t make your own hormones, store bought should be fine. If you are bad at writing, you should be allowed to use a computer to make you good at writing now. If you don’t have legs, you should get to roll, and people should stop expecting you to have legs. None of these differences between people, or in the ways that people choose to do things, should really be important.
Is there a word for that idea? Is it just what happens to your brain when you try to read the Office of Consensus Maintenance Analog Simulation System?
The issue under discussion is whether or not LLM companies should pay royalties on the training data, not the personhood of hypothetical future AGIs.
Plagiarism filters frequently trigger on chatgpt written books and articles.
Yea sure, right after Google and Amazon pay me for all the data they’ve stolen from me. LOL
So what’s the difference between a person reading their books and using the information within to write something, and an AI doing it?
Because AIs aren’t inspired by anything and they don’t learn anything
So uninspired writing is illegal?
No but a lazy copy of someone else’s work might be copyright infringement.
So when does Kevin Costner get to sue James Cameron for his lazy copy of Dances With Wolves?
Avatar is not Dances with Wolves. It’s Ferngully.
Idk, maybe. There are thousands of copyright infringement lawsuits, sometimes they win.
I don’t necessarily agree with how copyright law works, but that’s a different question. Doesn’t change the fact that sometimes you can successfully sue for copyright infringement if someone copies your stuff to make something new.
Why not? Hollywood is full to the brim with people suing for copyright infringement. And sometimes they win. Why should it be different for AI companies?
Language models actually do learn things in the sense that the information encoded in the trained model isn’t usually* taken directly from the training data; instead, it’s information that describes the training data, but is new. That’s why it can generate text that’s never appeared in the data.
* The bigger models seem to memorize some of the data and can reproduce it verbatim, but that’s not really the goal.
What does inspiration have to do with anything? And to be honest, humans being inspired has led to far more blatant copyright infringement.
As for learning, they do learn. No different than us, except we learn silly abstractions to make sense of things while AI learns from trial and error. Ask any artist if they’ve ever looked at someone else’s work to figure out how to draw something, even if they’re not explicitly looking up a picture, if they’ve ever seen a depiction of it, they recall and use that. Why is it wrong if an AI does the same?
the person bought the book before reading it
Not if I checked it out from a library. A WORLD of knowledge at your fingertips, and it’s all free to me, the consumer. So who’s to say the people training the AI didn’t check it out from a library, or even buy the books they are using to train the AI with? Would you feel better about it had they purchased their copy?
Large language models can only calculate the probability that words should go together based on existing texts.
Isn’t this correct? What’s missing?
Let’s ask chatGPT3.5:
Mostly accurate. Large language models like me can generate text based on patterns learned from existing texts, but we don’t “calculate probabilities” in the traditional sense. Instead, we use statistical methods to predict the likelihood of certain word sequences based on the training data.
“Mostly accurate” is pretty good for an anonymous internet post.
I don’t see how “calculate the probability” and “predict the likelihood” are different. Seems perfectly accurate to me.
I thought so too so I’m still confused about the votes. Oh well
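For what it’s worth, “calculate the probability that words go together” can be made concrete with a toy bigram model. This is a drastic simplification of what real LLMs do (they use learned neural networks, not raw counts), and the corpus here is invented for illustration:

```python
from collections import Counter, defaultdict

# Toy bigram model: estimate P(next word | current word) by counting
# adjacent word pairs in a tiny training corpus.
corpus = "the cat sat on the mat and the cat slept".split()

pair_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    pair_counts[current][nxt] += 1

def next_word_probs(word):
    """Return {next_word: probability} estimated from the corpus."""
    counts = pair_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # 'cat' is twice as likely as 'mat' after 'the'
```

“Calculate the probability” and “predict the likelihood” are the same operation here: both just mean producing this kind of distribution over possible next words, however it’s computed under the hood.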
A person is human and capable of artistry and creativity, computers aren’t. Even questioning this just means dehumanizing artists and art in general.
Not being allowed to question things is a really shitty precedent, don’t you think?
Do you think a hammer and a nail could do anything on their own, without a hand picking them up and guiding them? Because that’s what a computer is. Nothing wrong with using a computer to paint or write or record songs or create something, but it has to be YOU creating it, using the machine as a tool. It’s also in the actual definition of the word: art is made by humans. Which explicitly excludes machines. Period. Like I’m fine with AI when it SUPPORTS an artist (although sometimes it’s an obstacle, because sometimes I don’t want to be autocorrected, I want the thing I write to be written exactly as I wrote it, for whatever reason). But REPLACING an artist? Fuck no. There is no excuse for making a machine do the work and then taking the credit just to make a quick easy buck on the backs of actual artists who were used WITHOUT THEIR CONSENT to train a THING to replace them. Nah fuck off my guy. I can clearly see you never did anything creative in your whole life, otherwise you’d get it.
Nah fuck off my guy. I can clearly see you never did anything creative in your whole life, otherwise you’d get it.
Oh, right. So I guess my 20+ year graphic design career doesn’t fit YOUR idea of creative. You sure have a narrow view of life. I don’t like AI art at all; I think it’s a bad idea. You’re a bit too worked up about this to try to discuss anything. Not too excited about getting told to fuck off over an opinion. This place is no better than reddit ever was.
Of course I’m worked up. I love art, I love doing art, I have multiple friends and family members who work with art, and art is the last genuine thing that’s left in this economy. So yeah, obviously I’m angry at people who don’t get it and celebrate this bullshit just because they are too lazy to pick up a pencil, get good and draw their own shit, or alternatively commission what they wanna see from a real artist. Art was already PERFECT as it was. I have a right to be angry that tech bros are trying to completely ruin it after turning their nose up at art all their lives. They don’t care about why art is good? Ok cool, they can keep doing their graphs and shit and just leave art alone.
I don’t know how I feel about this honestly. AI took a look at the book and added the statistics of all of its words into its giant statistic database. It doesn’t have a copy of the book. It’s not capable of rewriting the book word for word.
This is basically what humans do. A person reads 10 books on a subject, studies, becomes somewhat of a subject matter expert, and writes their own book.
Artists use reference art all the time. As long as they don’t get too close to the original reference, nobody throws any flags.
These people are scared for their viability in their user space and they should be, but I don’t think trying to put this genie back in the bottle or extra charging people for reading their stuff for reference is going to make much difference.
It’s not at all like what humans do. It has no understanding of any concepts whatsoever, it learns nothing. It doesn’t know that it doesn’t know anything even. It’s literally incapable of basic reasoning. It’s essentially taken words and converted them to numbers, and then it examines which string is likely to follow each previous string. When people are writing, they aren’t looking at a huge database of information and determining the most likely word to come next, they’re synthesizing concepts together to create new ones, or building a narrative based on their notes. They understand concepts, they understand definitions. An AI doesn’t, it doesn’t have any conceptual framework, it doesn’t even know what a word is, much less the definition of any of them.
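For what it’s worth, the mechanical picture being described here (words mapped to numbers, then a lookup of what tends to follow what) can be caricatured in a few lines. This is a toy bigram counter of my own, nothing like a real transformer, but it shows the flavor of “predicting the next word from the data”:

```python
from collections import Counter, defaultdict

# Toy bigram "model": count which word follows which in a tiny corpus,
# then turn the counts into relative frequencies.
corpus = "the cat sat on the mat and the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    # "probabilities" here are just relative frequencies from the data
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

The model never stores the corpus itself, only the counts; whether that counts as “learning” is exactly what this thread is arguing about.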
How can you tell that our thoughts don’t come from a biological LLM? Maybe what we conceive of as “understanding” is just a feeling emerging from a more fundamental mechanism, like temperature emerges from the movement of particles.
Because we have biological, developmental, and psychological science telling us that’s not how higher-level thinking works. Human brains have the ability to function on a sort of autopilot similar to “AI”, but that is not what we are describing when we speak of creative substance.
When people are writing, they aren’t looking at a huge database of information and determining the most likely word to come next, they’re synthesizing concepts together to create new ones, or building a narrative based on their notes. They understand concepts, they understand definitions.
A huge part of what we do is like drawing from a huge mashup of accumulated patterns though. When an image or phrase pops into your head fully formed, on the basis of things that you have seen and remembered, isn’t that the same sort of thing as what AI does? Even though there are (poorly understood) differences between how humans think and what machine learning models do, the latter seems similar enough to me that most uses should be treated by the same standard for plagiarism; only considered violating if the end product is excessively similar to a specific copyrighted work, and not merely because you saw a copyrighted work and that pattern being in your brain affected what stuff you spontaneously think of.
I don’t think this is true.
The models (or maybe the characters in the conversations simulated by the models) can be spectacularly bad at basic reasoning, and misunderstand basic concepts on a regular basis. They are of course completely insane; the way they think is barely recognizable.
But they also, when asked, are often able to manipulate concepts or do reasoning and get right answers. Ask it to explain the water cycle like a pirate, and you get that. You can find the weights that encode the Eiffel Tower being in Paris, edit them so it’s in Rome, and then ask for a train itinerary to get there, and it will tell you to take the train to Rome.
I don’t know what “understanding” something is other than to be able to get right answers when asked to think about it. There’s some understanding of the water cycle in there, and some of pirates, and some of European geography. Maybe not a lot. Maybe it’s not robust. Maybe it’s superficial. Maybe there are still several differences in kind between whatever’s there and the understanding a human can get with a brain that isn’t 100% the stream of consciousness generator. But not literally zero.
I didn’t say what you said, that’s a lot of words and concepts you’re attributing to me that I didn’t say.
I’m saying, an LLM ingests data in a way it can average it out; in essence it learns it. It’s not rote memorization, but it’s not truly reasoning either, though it’s approaching it if you consider we might be overestimating human comprehension. It pulls in the data from all the places and uses the data to create new things.
People pull in data over a decade or two, we learn it, then end up writing books, or applying the information to work. They’re smart and valuable people and we’re glad they read everyone’s books.
The LLM ingests the data and uses the statistics behind it to do work, the world is ending.
I think you underestimate the reasoning power of these AIs. They can write code, they can teach math, they can even learn math.
I’ve been using GPT4 as a math tutor while learning linear algebra, and I also use a textbook. The textbook told me that (to write it out) “the column space of matrix A is equal to the column space of matrix A times its own transpose”. So I asked GPT4 if that was true and it said no; GPT disagreed with the textbook. This was apparently something that GPT did not memorize, and it was not just regurgitating sentences. I told GPT I saw it in a textbook, and the AI said “sorry, the textbook must be wrong”. I then explained the mathematical proof to the AI, and the AI apologized, admitted it had been wrong, and agreed with the proof. Only after hearing the proof did the AI agree with the textbook. This is some pretty advanced reasoning.
I performed that experiment a few times and it played out mostly the same. I experimented with giving the AI a flawed proof (I purposely made mistakes in the mathematical proofs), and the AI would call out my mistakes and would not be convinced by faulty proofs.
A standard that judged this AI to have “no understanding of any concepts whatsoever”, would also conclude the same thing if applied to most humans.
That doesn’t prove that GPT is reasoning; its model predicts that those responses are the most likely given the messages you’re sending it. It’s read thousands of actual conversations with people stating something incorrect, then having it explained to them, and then coming around and admitting they were wrong.
I’ve seen other similar cases where the AI is wrong about something, and when it’s explained, it just doubles down. Because humans do that type of thing too, refusing to admit they’re wrong.
The way it’s designed means that it cannot reason in the same way humans experience it. It can simulate a likely conversation someone would have if they could reason.
You know, I also had to experience thousands of conversations before I could mimic a conversation. Maybe I’m just slow, but it took like a full year of dedicated practice before I could speak a single word. Eventually I experienced so many conversations and became so good at mimicking the conversations I had heard from others that I started having actual conversations.
Most anecdotes about AI getting things wrong are from GPT3. GPT4 still has flaws of course, just like people. This technology didn’t exist at the beginning of the year.
Here’s an example while learning linear algebra where I tried to throw it off:
I say> So I’ll say A_{M x N} to mean a matrix of size M x N. Similarly, I’ll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a M x 1 matrix (or vector) in M dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.
GPT4 says> Yes, your understanding is correct. The product of an M x N matrix A and an N x 1 vector x is indeed an M x 1 vector, and it is a linear combination of the columns of A. <end quote>
But then I try to trick it, just for fun, and I very subtly change my prompt. See if you can spot the difference.
I say> So I’ll say A_{M x N} to mean a matrix of size M x N. Similarly, I’ll also say x_{M x 1} to mean a vector of a certain length. The result of A_{M x N} x_{N x 1} is a linear combination of the columns in A. A has N columns. The result is a N x 1 matrix (or vector) in N dimensional space. Please be very brief, only confirming whether I am correct or incorrect until I explicitly ask for more details.
GPT says> Almost correct. The result of A_{M x N} x_{N x 1} is actually an M x 1 vector (or matrix), not N x 1. The resulting vector lives in the column space of A, which is a subspace of R^M, not R^N. <end quote>
I guess everyone can judge for themselves whether that’s the result of a statistical model or genuine understanding. (And to be clear, the mathematical advice it’s giving here is correct.)
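The shape rule GPT4 was being quizzed on is easy to verify directly. A quick NumPy check (my own illustration, not part of the GPT conversation):

```python
import numpy as np

M, N = 3, 4
A = np.arange(M * N).reshape(M, N)   # an M x N matrix
x = np.ones((N, 1))                  # an N x 1 column vector

result = A @ x                       # a linear combination of A's columns
# The product is M x 1 (a vector in R^M), not N x 1 --
# exactly the correction GPT4 made in the second prompt.
print(result.shape)  # (3, 1)
```

With x all ones, the result is just the sum of A’s columns, which makes the “linear combination of the columns of A” claim easy to see.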
They can write code and teach maths because it’s read people doing the exact same stuff
Hey, that’s the same reason I can write code and do maths!
I’m serious, the only reason I know how to code or do math is because I learned from other people, mostly by reading. It’s the only reason I can do those things.
It’s just a really big autocomplete system. It has no thought, no reason, no sense of self or anything, really.
I guess I agree with some of that. It’s mostly a matter of definition though. Yes, if you define those terms in such a way that AI cannot fulfill them, then AI will not have them (according to your definition).
But yes, we know the AI is not “thinking” or “scheming”, because it just sits there doing nothing when it’s not answering a question. We can see that no computation is happening. So no thought. Sense of self… probably not, depends on definition. Reason? Depends on your definition. Yes, we know they are not like humans, they are computers, but they are capable of many things which we thought only humans could do 6 months ago.
Since we can’t agree on definitions I will simply avoid all those words and say that state-of-the-art LLMs can receive text and make free form, logical, and correct conclusions based upon that text at a level roughly equal to human ability. They are capable of combining ideas together that have never been combined by humans, but yet are satisfying to humans. They can invent things that never appeared in their training data, but yet make sense to humans. They are capable of quickly adapting to new data within their context, you can give them information about a programming language they’ve never encountered before (not in their training data), and they can make correct suggestions about that programming language.
I know you can find lots of anecdotes about LLMs / GPT doing dumb things, but most of those were GPT3 which is no longer state-of-the-art.
All this copyright/AI stuff is so silly and a transparent money grab.
They’re not worried that people are going to ask the LLM to spit out their book; they’re worried that they will no longer be needed because a LLM can write a book for free. (I’m not sure this is feasible right now, but maybe one day?) They’re trying to strangle the technology in the courts to protect their income. That is never going to work.
Notably, there is no “right to control who gets trained on the work” aspect of copyright law. Obviously.
There is nothing silly about that. It’s a fundamental question about using content of any kind to train artificial intelligence that affects way more than just writers.
I seriously doubt Sarah Silverman is suing OpenAI because she’s worried ChatGPT will one day be funnier than she is. She just doesn’t want it ripping off her work.
What do you mean when you say “ripping off her work”? What do you think an LLM does, exactly?
In her case, taking elements of her book and regurgitating them back to her. Which sounds a lot like they could be pirating her book for training purposes to me.
How do you know they didn’t just buy the book?
Again, that’s not relevant.
Quoting someone’s book is not “ripping off” the work.
How is it able to quote the book? Magic?
So you’re saying that as long as they buy 1 copy of the book, it’s all good?
No, I’m not saying that. If she’s right and it can spit out any part of her book when asked (and someone else showed that it does that with Harry Potter), it’s plagiarism. They are profiting off of her book without compensating her. Which is a form of ripping someone off. I’m not sure what the confusion here is. If I buy someone’s book, that doesn’t give me the right to put it all online for free.
Designing and marketing a system to plagiarize works en masse? That’s the cash grab.
Can you elaborate on this concept of a LLM “plagiarizing”? What do you mean when you say that?
What I mean is that it is a statistical model used to generate things by combining parts of extant works. Everything that it “creates” is a piece of something that already exists, often without the author’s consent. Just because it is done at a massive scale doesn’t make it less so. It’s basically just a tracer.
Not saying that the tech isn’t amazing or likely a component of future AI but, it’s really just being used commercially to rip people off and worsen the human condition for profit.
Everything that it “creates” is a piece of something that already exists, often without the author’s consent
This describes all art. Nothing is created in a vacuum.
No, it really doesn’t, nor does it function like human cognition. Take this example:
Say I, personally, decide that I want to make a sci-fi show. I don’t want to come up with ideas, so I want to try to do something that works. I take the scripts of Star Trek: The Search for Spock, Alien, and Earth Girls Are Easy and feed them into a database, separating words into individual data entries with some grammatical classification. Then, using this database, I generate a script, averaging the length of the films, with every word chosen based upon its occurrence in the films, or randomized if it’s a tie. I go straight into production with “Star Alien: The Girls Are Spock”. I am immediately sued by Disney, Lionsgate, and Paramount for trademark and copyright infringement, even though I basically just used a small LLM.
You are right that nothing is created in a vacuum. However, plagiarism is still plagiarism, even if it is using a technically sophisticated LLM plagiarism engine.
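The “Star Alien” generator described above would amount to something like this. The script texts are placeholders; the point is only the mechanism: every word of the output is drawn in proportion to how often it occurs in the sources.

```python
import random

# Placeholder text standing in for the actual film scripts.
scripts = [
    "placeholder text of the first film script",
    "placeholder text of the second film script",
    "placeholder text of the third film script",
]

words = " ".join(scripts).split()
target_length = len(words) // len(scripts)  # roughly the average script length

# random.choices samples with replacement; because duplicated words
# appear multiple times in the pool, each draw is weighted by occurrence.
generated = random.choices(words, k=target_length)
print(" ".join(generated))
```

Whether output from something this crude (or something far more sophisticated) crosses the line into infringement is the question the lawsuits are about.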
ChatGPT doesn’t have direct access to the material it’s trained on. Go ask it to quote a book to you.
That really doesn’t make an appreciable difference. It doesn’t need direct access to source data, if it’s already been transferred into statistical data.
I think this is more about frustration experienced by artists in our society at being given so little compensation.
The answer is staring us in the face. UBI goes hand in hand with developments in AI. Give artists a basic salary from the government so they can afford to live well. This isn’t an AI problem, this is a broken society problem. I support artists advocating for themselves, but the fact that they aren’t asking for UBI really speaks to how hopeless our society feels right now.
What incentive is there at all to work with UBI? Why would anyone try hard at anything if you’re not rewarded?
There will always be people who seek to challenge themselves.
Others will want more money than is included with their UBI. What on earth would be wrong with people having a little more, as opposed to so many struggling, needing roommates, and so on? I imagine with an extra 1k-2k in their pocket monthly, a lot more people would buy or build housing, and a lot of service industries would boom with all the additional potentially disposable income.
Or how about people being able to retire, like actually retire, without stress. We could lower the retirement age, or people could retire independently from government assistance, leading to more available jobs for younger people as more roles transition away due to automation.
And frankly, I honestly don’t see anything wrong with some portion of the populace just living on UBI and enjoying life if that’s how they want to do things. Nothing wrong with people being happier, less stressed, and potentially mentally and/or physically healthier for it.
Also I think it’s funny that we can bail out large companies on repeat, but bailing out people is a show stopper. It’s backwards. The economy is supposed to serve us, not the other way around.
Good question. I’ll admit that I like UBI, but I haven’t done any serious reading into it. I have a break from work this month, so maybe I’ll try and find a book so I can answer this type of question better in the future. I don’t think it’s so much an issue for good jobs. The real shit jobs might be an issue, but maybe not. And UBI in the beginning wouldn’t be for everyone, it would be for selected groups.
A brief search gave this: https://www.vox.com/future-perfect/2020/2/19/21112570/universal-basic-income-ubi-map
Some gains:
- lower crime
- increased fertility (maybe a good thing idk?)
- decrease or eliminate extreme poverty
- improves education
I don’t think anyone is thinking of the broader implications of this; they’d rather downvote opposing opinions. If UBI starts, where does the money come from? Higher taxes, in turn higher product costs, which just completes the cycle, making UBI not enough to live on, making an increase needed. They already tried this with minimum wage; it’s still what, $7? Full time work won’t even pay for your apartment, let alone 80 hour weeks.
A couple of options include taking profits gained through AI/automation that have historically gone to shareholders. The other is a VAT targeting the wealthy. We don’t need UBI for everyone all at once, so funding would be incremental. I don’t think funding is the largest challenge to UBI; the main one is people who oppose it for any number of reasons.
I don’t think either of us should try to assume an advanced knowledge of economics. We don’t know what will happen to the cycle, but the idea of UBI wouldn’t be proposed at all unless that were also a consideration already answered.
Big surprise, people do things despite not being paid for them!
Also a UBI should be just enough to live (afford food and shelter) wherever you live. Then you can work for more.
UBI is about freeing people from having to work multiple dead end jobs just to survive and enables them to have an actual pursuit of happiness. Not everyone will want to work harder, but the option opens to those who do.
Currently if you’re struggling just to pay for food and shelter, it’s incredibly hard to spend time developing skills needed to make more.
To answer you seriously (and these are out-of-date figures from memory): in Australia, all it would take to give everyone UBI is to tax every dollar people make beyond it at a flat 30%. (Currently it’s 19% after your first $18k, goes to 32% over $45k, 37% over $120k, and 45% after that.)
The positives are that:

- People can retire younger, meaning upward job mobility is greatly improved.
- The cost of means testing, managing welfare and aged pensions, and combating fraud of those systems effectively vanishes.
- Students could afford to study full time and work only part time or not at all AND pay their rent, meaning a better educated population.
- It effectively combats the minimum wage being too low. It’s ok for part-time baristas to make the minimum when the govt is making sure that people are already at “survival”.
- It indirectly funds the arts. Let’s be real, how many great musicians had to stop chasing their dreams because it was “practice or go to work and eat”?

For example: some guy making $100k a year pays about $25k in tax currently. Under the 30% arrangement he would pay $30k in tax, but the UBI would pay about $20k a year. So he’s still $15k in front. I’m no accountant, but I think for you to be worse off you have to be on about $200k a year or more.
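Taking the rough figures above at face value (a flat 30% on all income, a $20k/yr UBI, and ~$25k of current tax on a $100k income; these are the comment’s own estimates, not official rates), the arithmetic checks out:

```python
# Rough figures from the comment above, not official tax rates.
UBI = 20_000          # yearly UBI payment
FLAT_RATE = 0.30      # proposed flat tax on every dollar earned

def net_improvement(income, current_tax):
    """How much better off (positive) or worse off (negative) someone is
    under the proposed flat-tax-plus-UBI scheme."""
    new_tax = income * FLAT_RATE
    # net cost to the person = new tax minus the UBI they receive back
    return current_tax - (new_tax - UBI)

# Someone on $100k currently paying ~$25k tax ends up $15k in front:
print(net_improvement(100_000, 25_000))  # 15000.0
```

Under these assumptions the break-even point depends on what tax the person pays today, which is why the comment’s “about $200k or more” figure is only a guess.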
This is tough. I believe there is a lot of unfair wealth concentration in our society, especially in the tech companies. On the other hand, I don’t want AI to be stifled by bad laws.
If we try to stop AI, it will only take it away from the public. The military will still secretly use it, companies might still secretly use it. Other countries will use it and their populations will benefit while we languish.
Our only hope for a happy ending is to let this technology be free and let it go into the hands of many companies and many individuals (there are already decent models you can run on your own computer).
So, in your “only hope for a happy ending” scenario, how do the artists get paid? Or will we no longer need them after AI runs everything ;)
I don’t know. I only believe that things will be worse if individuals cannot control these AIs.
Maybe these AI have reached a peak (at least for now), and so they aren’t good enough to write a compelling novel. In that case, writers who produce good novels and get lucky will still get paid, because people will want to buy their work and read it.
Or maybe AI will quickly surpass all humans in writing ability, in which case, there’s not much we can do. If the AI produces books that are better, then people will want AI produced books. They might have to get those from other countries, or they might have to get them from a secret AI someone is running on a beefy computer in their basement. If AI surpasses humans then that’s not a happy day for writers, no way around it. Still, an AI that surpasses humans might help people in other ways, but only if we allow everyone to have and control their own AI.
As the industrial revolution threatened to swallow society, Karl Marx wrote about how important it was that regular people be able to control “the means of production”. At least that part of his philosophy has always resonated with me, because I want to be empowered as an individual, I want the power to create and compete in our society. It’s the same now: AI threatens to swallow society, and I want to be able to control my own AI for my own purposes.
If strong AI is coming, it’s coming. If AI is going to be the source of power in society then I want regular people to have access to that power. It’s not yet clear whether this is the case, but if strong AI is coming it’s going to be big, and writers complaining about pay isn’t going to stop it.
All that said, I believe we do a terrible job of caring for individuals in our society. We need more social safety nets; we need to change to provide people better and happier lives. So I’m not saying “forget the writers, let them starve”.
I agree with most of the things you are saying, but without some kind of policy to either give artist more power or money during the transition it sounds a lot like the “Some of you may die, but it’s a sacrifice I am willing to make” meme.
I see you are saying that you don’t want the artists to starve, but the only things I see you proposing would make them starve. Even if it is for the “greater good”.
Edit: It would help me feel more comfortable with your statements if you were to propose UBI or something like that so the artists have a solid ground to keep making good art and then let the AIs grow from that. Or just let the process happen. Let Artists sue companies, let laws get created that slow down the process of AI development. There is no need to make human sacrifice here. AI will get developed either way.
To be honest, I don’t think AI is going to get good enough to replace human creativity. Sure, some of the things that AI can do are pretty nice, but these things are mostly already solved problems - sure, AI can make passable art, but so can humans - and humans can go further than art: they can create logical connections between art pieces, and extrapolate art in new, reasonably justified ways, instead of the directionless, grasping-in-the-dark way that AI seems to do it.
Sure, AI can make art a thousand times faster than a person, but if only one in a thousand is tolerably good, then what’s the advantage?
AI is still very much in its infancy, and seeing the sort of progress that has been made even over the past 12 months, I don’t see how anyone can imagine that it will remain a small and discrete slice of the pie, that it doesn’t have radical transformative power.
My vision - gen z artists will reflexively use AI to enhance their material as artist and AI become entangled to a point where they’re impossible to distinguish. AI art will increase in fidelity, until it exceeds the fidelity that we can create with our tools. It will become immediately responsive to an audience’s needs in a way that human art can’t. What do you want to see? AI will make it for exactly your tastes, or to maybe confront your tastes and expand your mind, if that’s what you’d like. It will virtualize the artistic consciousnesses of Picasso, Goya, Michelangelo, and create new artists with new sensibilities, along with thousands of years of their works, more than a person could hope to view in a lifetime. Pop culture will be cheaper than ever, and have an audience of one - that new x rated final season of Friends you had a passing thought about is waiting for you to watch when you get home from work. Do you want 100 seasons of it? No problem. The whole notion of authorship is radically reformed and dies, drowned in an unfathomable abyss of AI creations. Human creativity becomes like human chess. People still busy themselves with it for fun, knowing full well that it’s anachronistic and inferior in every way.
Dunno, just a thought I have sometimes.
So yeah, you like AI because that way you won’t have to commission and pay real artists and you also don’t mind artists losing their jobs and being dehumanized and having to slave in factories. Glad one of you finally said it.
I didn’t say any of that.
You said that actually. In many more words, but the point of what you said is exactly that. If you want an AI to make the show you want and if all of us thought the way you’re thinking, what do you think writers, directors, actors etc would do?
Also, you’re not considering the fact that art is not made for the public. Art is self-expression. The fact that we like movies others have made is that something about them resonates with us, but the reason those movies were made is not that. And only humans can do self-expression. A machine has nothing to say, a machine feels nothing, a machine is artistically nothing. You’re standing up for the “artistic” equivalent of Matrix soup as replacement for real food cooked by real people.
I’m not ‘standing up’ for anything in particular and I don’t mean to express anything here as an outcome that I want, I’m just thinking out loud and wondering where this all goes.
I understand that you really dislike AI, and feel that what AI makes and what humans make will always and forever be categorically different in some important way. I can see where you’re coming from and a fruitful debate could be had there I think. I’m less sure than you are that AI can be tamed or bottled or destroyed. I think it’s something that is here to stay and will continue to develop whether we like the outcomes or not. As open source AI improves and gets into the hands of the average person, I don’t see how it’s possible to put effective limits on this technology. Geriatric politicians won’t do it, this is painfully obvious. Complaining (or advocating, which you could note I have not done here) in a small corner of an obscure comment thread on an obscure platform won’t make a difference either.
I get the sense that you believe there is a moral responsibility for everybody commenting in an online forum to call for the complete destruction of AI, and anything short of that is somehow morally wrong. I don’t understand that view at all. We’re musing into the void here and it has absolutely no effect on what will actually occur in the AI space. I’m open to changing my mind if you have a case to make about there being some moral responsibility to wave the flag that you want to wave, on an online forum, and that wondering aloud is somehow impermissible.
You didn’t say that explicitly, but that’s the implication of the world you’re imagining. You’re literally describing the death of all forms of creative industry–human musicians, human writers, human actors–all replaced with AI. You’re describing the death of shared creative experiences; with an audience of one, nobody commiserates together over a movie they watched, or a book series they discovered, or talks about the new season of a favorite TV show together. You’re describing the death of any form of subversive thought; with all media produced by AI, guard rails on creativity are trivial to introduce, gently redirecting, or outright prohibiting subject matter that is deemed inappropriate (and if you think I’m wrong, just imagine the world you’re describing in modern day China–do you seriously think they would allow AI to proliferate that allowed you to create a movie about Tiananmen Square?).
The world you’re describing sounds like a plastic, lifeless, lonely hellhole. It’s the kind of world sci-fi authors would use as a dystopian background.
Sure, and I think the kinds of things that you mention might come to pass. But for the record I didn’t say that I thought it was good. It’s just a direction I think these things could go. There’s no putting this genie back in the bottle. The view that AI will remain in the background, or merely solve problems that we already have solutions for, or cannot possibly bear on the character and influence of human creativity, I think underestimates the possibilities for change that this still very young technology could bring. That’s all I’m saying, sorry if that wasn’t clear.
You absolutely get it. Like, right now all art is owned by corporations and shit is already bad as it is, if it’s made by machines that means the death of any kind of shred of human thought and empathy. It disgusts me to my core. It’s the goddamn world from Matrix, except some people are even INVITING IT IN. Like, I’d rather get a frontal lobotomy than see “art” made by AI.
It only becomes a problem if it is “good enough” to replace working artists. Companies have shown time and time again that they would be willing to cut corners for cheaper production costs. Hollywood would happily sell us AI generated stuff if we were willing to buy it. So consumers would really need to care and push back hard against AI art for artists to remain employed. I don’t see it happening.
Obligatory xkcd: https://xkcd.com/827/
Isn’t learning the basic act of reading text? I’m not sure what the AI companies are doing is completely right but also, if your position is that only humans can learn and adapt text, that broadly rules out any AI ever.
Isn’t learning the basic act of reading text?
not even close. that’s not how AI training models work, either.
if your position is that only humans can learn and adapt text
nope-- their demands are right at the top of the article and in the summary for this post:
Thousands of authors demand payment from AI companies for use of copyrighted works::Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools
that broadly rules out any AI ever
only if the companies training AI refuse to pay
Isn’t learning the basic act of reading text?
not even close. that’s not how AI training models work, either.
Of course it is. It’s not a 1:1 comparison, but the way generative AI works and the way we incorporate styles and patterns are more similar than not. Besides, if a tensorflow script more closely emulated a human’s learning process, would that matter for you? I doubt that very much.
Thousands of authors demand payment from AI companies for use of copyrighted works::Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools
Having to individually license each unit of work for an LLM would be as ridiculous as trying to run a university where you have to individually license each student reading each textbook. It would never work.
What we’re broadly talking about is generative work. That is, by absorbing a body of work, the model incorporates it into an overall corpus of learned patterns. That’s not materially different from how anyone learns to write. Even my use of the word “materially” in the last sentence is, surely, based on seeing it used in similar patterns of text.
The difference is that a human’s ability to absorb information is finite and bounded by the constraints of our experience. If I read 100 science fiction books, I can probably write a new science fiction book in a similar style. The difference is that I can only do that a handful of times in a lifetime. An LLM can do it almost infinitely and then have that ability reused by any number of other consumers.
There’s a case here that the remuneration process we have for original work doesn’t fit well into the AI training models, and maybe Congress should remedy that, but on its face I don’t think it’s feasible to just shut it all down. Something like a compulsory license model, with the understanding that AI training is automatically fair use, seems more reasonable.
Of course it is. It’s not a 1:1 comparison
no, it really isn’t–it’s not a 1000:1 comparison. AI generative models are advanced relational algorithms and databases. they don’t work at all the way the human mind does.
but the way generative AI works and the way we incorporate styles and patterns are more similar than not. Besides, if a tensorflow script more closely emulated a human’s learning process, would that matter for you? I doubt that very much.
no, the results are just designed to be familiar because they’re designed by humans, for humans to be that way, and none of this has anything to do with this discussion.
Having to individually license each unit of work for an LLM would be as ridiculous as trying to run a university where you have to individually license each student reading each textbook. It would never work.
nobody is saying it should be individually-licensed. these companies can get bulk license access to entire libraries from publishers.
That’s not materially different from how anyone learns to write.
yes it is. you’re just framing it in those terms because you don’t understand the cognitive processes behind human learning. but if you want to make a meta comparison between the cognitive processes behind human learning and the training processes behind AI generative models, please start by citing your sources.
The difference is that a human’s ability to absorb information is finite and bounded by the constraints of our experience. If I read 100 science fiction books, I can probably write a new science fiction book in a similar style. The difference is that I can only do that a handful of times in a lifetime. An LLM can do it almost infinitely and then have that ability reused by any number of other consumers.
this is not the difference between humans and AI learning, this is the difference between human and computer lifespans.
There’s a case here that the remuneration process we have for original work doesn’t fit well into the AI training models
no, it’s a case of your lack of imagination and understanding of the subject matter
and maybe Congress should remedy that
yes
but on its face I don’t think it’s feasible to just shut it all down.
nobody is suggesting that
Something of a compulsory license model, with the understanding that AI training is automatically fair use, seems more reasonable.
lmao
You’re getting lost in the weeds here and completely misunderstanding both copyright law and the technology used here.
First of all, copyright law does not care about the algorithms used and how well they map what a human mind does. That’s irrelevant. There’s nothing in particular about copyright that applies only to humans but not to machines. Either a work is transformative or it isn’t. Either it’s derivative or it isn’t.
What AI is doing is incorporating individual works into a much, much larger corpus of writing style and idioms. If an LLM sees an idiom used a handful of times, it might start using it where the context fits. If a human sees an idiom used a handful of times, they might do the same. That’s true regardless of algorithm, and there’s certainly nothing in copyright or common sense that separates one from another. If I read enough Hunter S. Thompson, I might start writing like him. If you feed an LLM enough of the same, it might too.
Where copyright comes into play is in whether the new work produced is derivative or transformative. If an entity writes and publishes a sequel to The Road, Cormac McCarthy’s estate is owed some money. If an entity writes and publishes something vaguely (or even directly) inspired by McCarthy’s writing, no money is owed. How that work came to be (algorithms or human flesh) is completely immaterial.
So it’s really, really hard to make the case that there’s any direct copyright infringement here. Absorbing material and incorporating it into future works is what the act of reading is.
The problem is that as a consumer, if I buy a book for $12, I’m fairly limited in how much use I can get out of it. I can only buy and read so many books in my lifetime, and I can only produce so much content. The same is not true for an LLM, so there is a case that Congress should charge them differently for using copyrighted works, but the idea that OpenAI should have to go to each author and negotiate each book would really just shut the whole project down. (And no, it wouldn’t be directly negotiated with publishers, as authors often retain the rights to deny or approve licensure).
You’re getting lost in the weeds here and completely misunderstanding both copyright law and the technology used here.
you’re accusing me of what you are clearly doing after I’ve explained twice how you’re doing that. I’m not going to waste my time doing it again. except:
Where copyright comes into play is in whether the new work produced is derivative or transformative.
except that the contention isn’t necessarily over what work is being produced (although whether it’s derivative work is still a matter for a court to decide anyway); it’s that the source material is used for training without compensation.
The problem is that as a consumer, if I buy a book for $12, I’m fairly limited in how much use I can get out of it.
and, likewise, so are these companies who have been using copyrighted material - without compensating the content creators - to train their AIs.
these companies who have been using copyrighted material - without compensating the content creators - to train their AIs.
That wouldn’t be copyright infringement.
It isn’t infringement to use a copyrighted work for whatever purpose you please. What’s infringement is reproducing it.
It isn’t infringement to use a copyrighted work for whatever purpose you please.
and you accused me of “completely misunderstanding copyright law” lmao wow
It’s infringement to use copyrighted material for commercial purposes.
Okay, given that AI models need to look over hundreds of thousands if not millions of documents to get to a decent level of usefulness, how much should the author of each individual work get paid out?
Even if we say we are going to pay out a measly dollar for every work it looks over, you’re immediately talking millions of dollars in operating costs. Doesn’t this just box out anyone who can’t afford to spend tens or even hundreds of millions of dollars on AI development? Maybe good if you’ve always wanted big companies like Google and Microsoft to be the only ones able to develop these world-altering tools.
Another issue: who decides which works are more valuable, or how? Is a Shel Silverstein book worth less than a Mark Twain novel because it contains fewer words? If I self-publish a book, is it worth as much as Mark Twain’s? Sure, his is more popular, but maybe mine is longer and contains more content; what’s my payout in this scenario?
i admit it’s a huge issue, but the licensing costs are something that can be negotiated by the license holders in a structured settlement.
moving forward, AI companies can negotiate licensing deals for access to licensed works for AI training, and authors of published works can decide whether they want to make their works available to AI training (and their compensation rates) in future publishing contracts.
the solutions are simple-- the AI companies like OpenAI, Google, et al are just complaining because they don’t want to fork over money to the copyright holders they ripped off and set a precedent that what they’re doing is wrong (legally or otherwise).
Sure, but what I’m asking is: what do you think is a reasonable rate?
We are talking data sets that have millions of written works in them. If it costs hundreds or thousands per work, this venture almost doesn’t make sense anymore. If it’s $1 per work, or cents per work, then is it even worth it for each individual contributor to get $1 when it adds millions in operating costs?
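The back-of-envelope math here is easy to sketch. The dataset size and per-work rates below are illustrative assumptions, not figures from the article or any real licensing deal:

```python
# Illustrative only: the dataset size and rates are hypothetical,
# chosen to show how total cost scales with the per-work rate.
DATASET_SIZE = 5_000_000  # assumed number of licensed works

for rate in (0.10, 1.00, 100.00):  # dollars per work, assumed
    total = DATASET_SIZE * rate
    print(f"${rate:>6.2f}/work -> ${total:,.0f} total licensing cost")
```

Even at ten cents per work, a multi-million-work corpus implies a six-figure bill, and at $100 per work it reaches hundreds of millions, which is the commenter’s point about boxing out smaller developers.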
In my opinion, this needs to be handled a lot more carefully than what is being proposed. We are potentially going to make AI datasets wayyyy too expensive for anyone to use aside from the largest companies in the market, and even then this will cause huge delays to that progress.
If AI is just blatantly copy and pasting what it read, then yes, I see that as a huge issue. But reading and learning from what it reads, no matter how rudimentary that “learning” may be, is much different than just copying works.
that’s not for me to decide. as I said, it is for either the courts to decide or for the content owners and the AI companies to negotiate a settlement (for prior infringements) and a negotiated contracted amount moving forward.
also, I agree that it’s a massive clusterfuck that these companies just purloined a fuckton of copyrighted material for profit without paying for it, but I’m glad that they’re finally being called out.
Dude, they said
If AI is just blatantly copy and pasting what it read, then yes, I see that as a huge issue.
That’s in no way agreeing that “it’s a massive clusterfuck that these companies just purloined a fuckton of copyrighted material for profit without paying for it”. Do you not understand that AI is not just copying and pasting content?
Removed by mod
AI isn’t doing anything creative. These tools are merely ways to deliver the information you put into it in a way that’s more natural and dynamic. There is no creation happening. The consequence is that you either pay for use of content, or you’ve basically diminished the value of creating content and potentiated plagiarism at a gargantuan level.
Being that this “AI” doesn’t actually have the capacity for creativity, if actual creativity becomes worthless, there will be a whole lot less incentive to create.
The “utility” of it right now is being created by effectively stealing other people’s work. Hence, the court cases.
Please first define “creativity” without artificially restricting it to humans. Then, please explain how AI isn’t doing anything creative.
deleted by creator
Sure, AI is not doing anything creative, but neither is my pen; it’s the tool I’m using to be creative. Let’s think about this more with some scenarios:
Let’s say software developer “A” comes along, and they’re pretty fucking smart. They sit down, read through all of Mark Twain’s novels, and over the course of the next 5 years, create a piece of software that generates works in Twain’s style. It’s so good that people begin using it to write real books. It doesn’t copy anything specifically from Twain, it just mimics his writing style.
We also have developer “B”. While Dev A is working on his project, Dev B is working on a very similar project, but with one difference: Dev B writes an LLM to read the books for him, and develop a writing style similar to Twain’s based off of that. The final product is more or less the same as Dev A’s product, but he saves himself the time of needing to read through every work on his own, he just reads a couple to get an idea of what the output might look like.
Is the work from Dev A’s software legitimate? Why or why not?
Is the work from Dev B’s software legitimate? Why or why not?
Assume both of these developers own copies of the works they used as training data, what is honestly the difference here? This is what I am struggling with so much.
Both developers have created a parrot tool. A utility to plagiarise a style.
So now the output of both programs is “illegitimate” in your eyes, despite one of them never even getting direct access to the original text.
Now let’s say one of them just writes a story in the style of Twain. Still plagiarism? Because I don’t know if you can copyright a style.
The first painter painted on cave walls with his fingers. Was the brush a parrot tool? A utility to plagiarize? You could use it for plagiarism, yes, and by your logic, it shouldn’t be used. And any work created using it is not “legitimate”.
Okay, given that AI models need to look over hundreds of thousands if not millions of documents to get to a decent level of usefulness, how much should the author of each individual work get paid out?
Congress has been here before. In the early days of radio, DJs were infringing on recording copyrights by playing music on the air. Congress knew it wasn’t feasible to require every song be explicitly licensed for radio reproduction, so they created a compulsory license system where creators are required to license their songs for radio distribution. They do get paid for each play, but at a rate set by the government, not negotiated directly.
Another issue, who decides which works are more valuable, or how? Is a Shel Silverstein book worth less than a Mark Twain novel because it contains less words? If I self publish a book, is it worth as much as Mark Twains? Sure his is more popular but maybe mine is longer and contains more content, whats my payout in this scenario?
I’d say no one. Just like Taylor Swift gets the same payment as your garage band per play, a compulsory licensing model doesn’t care who you are.
Doesn’t this just box out anyone who can’t afford to spend tens or even hundreds of millions of dollars on Al development?
The government could allow the donation of original art for the purpose of tech research to be a tax write-off, and then there can be non-profits that work between artists and tech developers to collect all the legally obtained art, and grant access to those that need it for projects
That’s just one option off the top of my head, which I’m sure would have some procedural obstacles, and chances for problems to be baked in, but I’m sure there are other options as well.
Why is any of that the author’s problem
A key point is that intellectual property law was written to balance the limitations of human memory and intelligence, public interest, and economic incentives. It’s certainly never been in perfect balance. But the possibility of a machine being able to consume enormous amounts of information in a very short period of time has never been a variable for legislators. It throws the balance off completely in another direction.
There’s no good way to resolve this without amending both our common understanding of how intellectual property should work and serve both producers and consumers fairly, as well as our legal framework. The current laws are simply not fit for purpose in this domain.
I very much agree.
Nothing about today’s iteration of copyright is reasonable or good for us. And in any other context, this (relatively) leftist forum would clamour to hate on copyright. But since it could now hurt a big corporation, suddenly copyright is totally cool and awesome.
(for reference, the true problem here is, as always, capitalism)
This is so stupid. If I read a book and get inspired by it and write my own stuff, as long as I’m not using the copyrighted characters, I don’t need to pay anyone anything other than purchasing the book which inspired me originally.
If this were the law, why shouldn’t pretty much every modern-day fantasy author have to pay the Tolkien foundation, or every nonfiction author pay for each citation?
There’s a difference between a sapient creature drawing inspiration and a glorified autocomplete using copyrighted text to produce sentences which are only cogent due to substantial reliance upon those copyrighted texts.
All AI creations are derivative and subject to copyright law.
There’s a difference between a sapient creature drawing inspiration and a glorified autocomplete using copyrighted text to produce sentences which are only cogent due to substantial reliance upon those copyrighted texts.
But the AI is looking at thousands, if not millions of books, articles, comments, etc. That’s what humans do as well - they draw inspiration from a variety of sources. So is sentience the distinguishing criteria for copyright? Only a being capable of original thought can create original work, and therefore anything not capable of original thought cannot create copyrighted work?
Also, irrelevant here but calling LLMs a glorified autocomplete is like calling jet engines a “glorified horse”. Technically true but you’re trivialising it.
Yes. Creative work is made by creative people. Writing is creative work. A computer cannot be creative, and thus generative AI is a disgusting perversion of what you wanna call “literature”. Fuck, writing and art have always been primarily about self-expression. Computers can’t express themselves with original thoughts. That’s the whole entire point. And this is why humanistic studies are important, by the way.
I absolutely agree with the second half, guided by Ian Kerr’s paper “Death of the AI Author”; quoting from the abstract:
Claims of AI authorship depend on a romanticized conception of both authorship and AI, and simply do not make sense in terms of the realities of the world in which the problem exists. Those realities should push us past bare doctrinal or utilitarian considerations about what an author must do. Instead, they demand an ontological consideration of what an author must be.
I think the part courts will struggle with is if this ‘thing’ is not an author of the works then it can’t infringe either?
Courts have already weighed in, and what they said is basically that copyright can’t be claimed for the throw-up AIs come up with, which means corporations can’t use it to make money or sue anyone for using those products. Which means generated AI products are a whole bowl of nothing legally, and have no identity nor any value. The whole reason commissions are expensive is that someone has spent money, time and effort to make the thing you asked of them, and that’s why compensating them with money is right.
Also, why can’t AI be used to automate the shit jobs and allow us to do the creative work? Why are artists and creatives being pushed out of doing the jobs only humans can do? Like this is the thing that makes me furious: that STEM bros are blowing each other in the fields over humans being pushed out of humanity. Without once thinking AI is much more apt at replacing THEIR jobs, but I’m not calling for their jobs to be removed. This is just a dystopic reality we’re barreling towards, and there are people who are HAPPY about humans losing what makes us human and speeding toward pure, total, complete misery. That’s why I’m emotional about this: because art is only, solely made by humans, and people create art to communicate something they have inside. And only humans can do that - and some animals, maybe. Machines have nothing inside. They are nothing, they are only tools. It’s like asking a hammer to write its own poetry, it’s just insane.
The trivialization doesn’t negate the point though, and LLMs aren’t intelligence.
The AI consumed all of that content, and I would bet that not a single one of the people who created the content was compensated, yet the AI relies strictly on those people to produce anything coherent.
I would argue that yes, generative artificial stupidity doesn’t meet the minimum bar of original thought necessary to create a standard copyrightable work unless every input has consent to be used, and laundering content through multiple generations of an LLM or through multiple distinct LLMs should not impact the need for consent.
Without full consent, it’s just a massive loophole for those with money to exploit the hard work of the masses who generated all of the actual content.
The thing is, these models aren’t aiming to re-create the work of any single author, but merely to put words in the right order. Imo, if we allow authors to copyright the order of their words instead of their whole original creations, then we are actually reducing the threshold for copyright protection and (again imo) increasing the number of acts that would be determined to be copyright-protected.
But for text to be a derivative work of other text, you need to be able to know by looking at the two texts and comparing them.
Training an AI on a copyrighted work might necessarily involve making copies of the work that would be illegal to make without a license. But the output of the AI model is only going to be a for-copyright-purposes derivative work of any of the training inputs when it actually looks like one.
Did the AI regurgitate your book? Derivative work.
Did the AI spit out text that isn’t particularly similar to any existing book? Which, if written by a human, would have qualified as original? Then it can’t be a derivative work. It might not itself be a copyrightable product of authorship, having no real author, but it can’t be secretly a derivative work in a way not detectable from the text itself.
Otherwise we open ourselves up to all sorts of claims along the lines of “That book looks original, but actually it is a derivative work of my book because I say the author actually used an AI model trained on my book to make it! Now I need to subpoena everything they ever did to try and find evidence of this having happened!”
Machine learning algorithms do not get inspired; they replicate. If I tell an LLM to write a scene for a film in the style of Charlie Kaufman, it has to be told who Kaufman is and be fed a lot of manuscripts. Then it tries to mimic the style and guess what words come next.
This is not how we humans get inspired. And if we do, we get accused of stealing. Which it is.
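Whatever one makes of the inspiration debate, the “guess what words come next” mechanic being described can be illustrated with a toy next-word predictor. Real LLMs use neural networks over billions of tokens; the tiny corpus and bigram counts here are a made-up stand-in for the same training objective:

```python
from collections import Counter, defaultdict

# Toy stand-in corpus; real models train on vastly more text.
corpus = "the rain fell and the rain kept falling on the old road".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the word that most often followed `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "rain", since it follows "the" most often here
```

The model never stores the sentence; it stores frequencies derived from it, and its output is only as coherent as the text it counted, which is the crux of both sides of this argument.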
Because a computer can only read the stuff, chew it and throw it up. With no permission. Without needing to practice and create its own personal voice. It’s literally recycled work by other people, because computers cannot be creative. On the other hand, human writers DO develop their own style, find their own voice, and what they write becomes unique because of how they write it and the meaning they give to it. It’s not the same thing, and writers deserve to get repaid for having their art stolen by corporations to make a quick and easy buck. Seriously, you wanna write? Pick up a pen and do it. Practice, practice, practice for weeks months years decades. And only then you may profit. That’s how it always was and it always worked fine that way. Fuck computers.
But to read the book and be inspired by it, you first had to buy the book. That’s the difference.