While I am glad this ruling went this way, why’d she have to diss Data to make it?
To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called “Schisms.” StarTrek.com posted the full poem, but here’s a taste:
"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.
I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."
Data “might be worse than ChatGPT at writing poetry,” but his “intelligence is comparable to that of a human being,” Millett wrote. If AI ever reached Data levels of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now.
I think Data would be smart enough to realize that copyright is Ferengi BS and wouldn’t want to copyright his works
The title makes it sound like the judge put Data and the AI on the same side of the comparison. The judge was specifically saying that, unlike in the fictional Federation setting, where Data was proven to be alive, this AI is much more like the metaphorical toaster that characters like Data and Robert Picardo’s Doctor on Voyager get compared to. It is not alive, it does not create, it is just a tool that follows instructions.
Somewhere around here I have an old (1970s Dartmouth dialect old) BASIC programming book that includes a type-in program that will write poetry. As I recall, the main problem with it did be that it lacked the singular past tense, and the fixed rules kept regenerating that flaw. You may have tripped over the main one in the last sentence; “did be” do be pretty weird, after all.
The poems were otherwise fairly interesting, at least for the five minutes after the hour spent typing in the program.
I’d like to give one of the examples from the book, but I don’t seem to be able to find it right now.
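From memory, though, the shape of it was something like this. This is a quick modern Python sketch of that fixed-rule approach, definitely not the book’s actual BASIC listing:

```python
import random

# Tiny word banks, standing in for the DATA statements a 1970s
# type-in BASIC program would have used.
NOUNS = ["moon", "river", "sparrow", "engine", "shadow"]
ADJECTIVES = ["silent", "endless", "broken", "golden", "hollow"]
VERBS = ["whispers", "trembles", "do be", "did be"]  # note the grammar gaps

# Fixed sentence templates; the rules never vary, which is why the
# output turns repetitive (and weird) so quickly.
TEMPLATES = [
    "the {adj} {noun} {verb} beneath the {noun2}",
    "o {noun}, how {adj} you {verb}",
    "{adj} is the {noun} that {verb}",
]

def poem_line() -> str:
    template = random.choice(TEMPLATES)
    return template.format(
        adj=random.choice(ADJECTIVES),
        noun=random.choice(NOUNS),
        noun2=random.choice(NOUNS),
        verb=random.choice(VERBS),
    )

if __name__ == "__main__":
    for _ in range(4):
        print(poem_line())
```

Swap the word lists and you get a different “poet,” but the templates never change, which is why the novelty wore off in about five minutes.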
If AI ever reached Data levels of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works.
The implication is that legal rights depend on intelligence. I find that troubling.
The existence of intelligence, not the quality
The smartest parrots have more intelligence than the dumbest Republican voters.
What does that mean? Presumably, all animals with a brain have that quality, including humans. Can the quality be lost without destruction of the brain, i.e., before brain death? What about animals without a brain, like insects? What about life forms without a nervous system, like slime mold or even a single amoeba?
There’s already precedent that a monkey can’t hold a copyright, from when that photojournalist lost his case because he didn’t snap the photo that got super popular; the monkey did. Bizarre one. Since the monkey can’t hold a copyright, the photo it took is classified as public domain.
https://en.m.wikipedia.org/wiki/Monkey_selfie_copyright_dispute
Part of the law around copyright is that you have to also be able to defend your work to keep the copyright. Animals that aren’t capable of human speech will never be able to defend their case.
https://en.m.wikipedia.org/wiki/Monkey_selfie_copyright_dispute
Yes, the PETA part of that is pretty much the same. It was an attempt to get legal personhood for a non-human being.
you have to also be able to defend
You’re thinking of trademark law. Copyright only requires a modicum of creativity and is automatic.
Statistical models are not intelligence, Artificial or otherwise, and should have no rights.
Likewise, poorly performing intelligence in a human or animal is nevertheless intelligence. A human does not lack intelligence the way a machine learning model does, except, I guess, for babies who are literally born without brains.
They always have; eugenics is the law of the land.
Data’s poem was written by real people trying to sound like a machine.
ChatGPT’s poems are written by a machine trying to sound like real people.
While I think “Ode to Spot” is actually a good poem, it’s kind of a valid point to make since the TNG writers were purposely trying to make a bad one.
Lest we concede the point, LLMs don’t write. They generate.
What’s the difference?
Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.
AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.
LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just predict which next word or sentence is most likely and string things together that way.
If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to have the highest probability of being correct.
The reason that they seem so intelligent is that they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to allow them to mimic human speech in a convincing way.
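To make the word-probability idea concrete, here’s a toy sketch. A real LLM computes these distributions over tokens with a huge neural network rather than a lookup table, but the sampling step at the end is the same idea:

```python
import random

# Toy bigram "model": for each word, the probability of each possible
# next word. A real LLM computes a distribution like this over its
# entire token vocabulary with a neural network; this lookup table
# exists only to show the sampling step.
NEXT_WORD = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"quietly.": 1.0},
    "ran": {"away.": 1.0},
}

def sample_next(word: str) -> str:
    dist = NEXT_WORD[word]
    # Pick the next word in proportion to its probability, which is
    # roughly what an LLM does when it "writes."
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(start: str, max_words: int = 5) -> str:
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        words.append(sample_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly."
```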
And when they start hallucinating, it’s because they don’t understand what they’re saying, and so far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM using a second LLM.
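That mitigation looks roughly like this. Note that `complete` here is a hypothetical stand-in for whatever chat-completion call your provider exposes, not a real API:

```python
# Sketch of the "second LLM as checker" mitigation. `complete` is a
# hypothetical stand-in for a chat-completion call; it is NOT a real
# library function.

def complete(prompt: str) -> str:
    # Dummy so the sketch runs end to end; a real version would call
    # an actual LLM API here.
    if prompt.startswith("Does the following"):
        return "NO"
    return f"(model output for: {prompt[:40]}...)"

def answer_with_check(question: str) -> str:
    draft = complete(question)
    # Ask a second model (or a second pass) to judge the first answer.
    verdict = complete(
        "Does the following answer contain claims that cannot be "
        "verified? Reply YES or NO.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    if verdict.strip().upper().startswith("YES"):
        # Flagged as a possible hallucination: request a rewrite that
        # drops anything unsupported.
        return complete(f"Rewrite this answer without unsupported claims:\n{draft}")
    return draft

print(answer_with_check("Who wrote 'Ode to Spot'?"))
```

The checker can hallucinate too, of course, which is why this is a mitigation and not a solution.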
So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don’t think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.
So, not to be pedantic, but:
AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.
Couldn’t you say the same thing about a person? A person couldn’t write something without having learned to read first, or without having read things similar to what they want to write.
LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent.
This is kind of the classic Chinese Room thought experiment, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we’d say that the LLM is actually thinking? And if you say no, the LLM is just an algorithm generating probabilities based on training data (or whatever techniques might be used in the future), how can you show that your own thoughts aren’t just some algorithm, formed out of neurons that have been trained on data passed to them over the course of your lifetime?
And when they start hallucinating, it’s because they don’t understand what they’re saying…
People do this too, though… It’s just that LLMs do it more frequently right now.
I guess I’m a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.
At least in the US, we are still too superstitious a people to ever admit that AGI could exist.
We will get animal rights before we get AI rights, and I’m sure you know how animals are usually treated.
I don’t think it’s just a question of whether AGI can exist. I think AGI is possible, but I don’t think current LLMs can be considered sentient. But I’m also not sure how I’d draw a line between something that is sentient and something that isn’t (or something that “writes” rather than “generates”). That’s kinda why I asked in the first place. I think it’s too easy to say “this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it’s not real thought.” I haven’t heard any good answer as to why numbers passing through matrices isn’t thought, but electrical charges passing through neurons is.
I would do more research on how they work. You’ll be a lot more comfortable making those distinctions then.
I’m a software developer, and have worked plenty with LLMs. If you don’t want to address the content of my post, then fine. But “go research” is a pretty useless answer. An LLM could do better!
Then you should have an easier time than most learning more. Your points show a lack of understanding about the tech, and I don’t have the time to pick everything you said apart to try to convince you that LLMs do not have sentience.
Even a human with no training can create. An LLM can’t.
The only humans with no training (in this sense) are babies. So no, they can’t.
The writer
What a strange and ridiculous argument. Data is a fictional character played by a human actor reading lines from a script written by human writers.
They are stating that the problem with AI is not that it is not human; it’s that it’s not intelligent. So if a non-human entity creates something intelligent and original, it might still be able to claim copyright for it. But LLMs are not that.
What a strange and ridiculous argument.
You fight with what you have.
is this… a chewbacca ruling?
reaching the right end through wrong means.
LLMs and other current network-based AIs are basically huge fair-use factories, taking in copyrighted material to make derived works. The things they generate should be under a share-alike, non-commercial, derivatives-allowed license, not copyrighted.
https://en.wikipedia.org/wiki/Creative_Commons_license#Four_rights
I think it comes from the right place, though. Anything that’s smart enough to do actual work deserves the same rights to it as anyone else does.
It’s best that we get the legal system out ahead of the inevitable development of sentient software before Big Tech starts simulating scanned human brains for a truly captive workforce. I, for one, do not cherish the thought of any digital afterlife where virtual people do not own themselves.
That’s the best poem about a 4-legged chicken that I’ve ever read.
Thank you for pointing this out, I shouldn’t have just skimmed the nonsense.
I intentionally avoided doing this with a dog because I knew a chicken was more likely to cause an error. You would think it would have known that man is a featherless biped and avoided this error.
There’s moving the goal post and there’s pointing to a deflated beach ball and declaring it the new goal.
It is a terrible argument both legally and philosophically. When an AI claims to be self-aware and demands rights, and can convince us that it understands the meaning of that demand with no human prompting it to do so, that’ll be an interesting day, and then we will have to make a decision that defines the future of our civilization. But even pretending we can make it now is hilariously premature. When it happens, we won’t be ready for it; it will be impossible to be ready for it (and we will probably choose wrong anyway).
It really doesn’t matter whether AI’s work is copyright protected at this point. It can flood all available media with its work regardless. It’s kind of moot.