While I am glad this ruling went this way, why’d she have to diss Data to make it?
To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called “Schisms.” StarTrek.com posted the full poem, but here’s a taste:
"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.
I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."
Data “might be worse than ChatGPT at writing poetry,” but his “intelligence is comparable to that of a human being,” Millett wrote. If AI ever reached Data’s level of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now.
I don’t think it’s just a question of whether AGI can exist. I think AGI is possible, but I don’t think current LLMs can be considered sentient. But I’m also not sure how I’d draw a line between something that is sentient and something that isn’t (or something that “writes” rather than “generates”). That’s kinda why I asked in the first place. I think it’s too easy to say “this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it’s not real thought”. I haven’t heard any good answers to why numbers passing through matrices isn’t thought, but electrical charges passing through neurons is.
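To be concrete about what I mean by numbers passing through matrices, here’s a minimal NumPy sketch of a single neural-network layer. The sizes and values are made up for illustration; a real model stacks dozens of far larger layers like this:

```python
# One feed-forward layer: a matrix of learned weights applied to inputs,
# plus a nonlinearity. All sizes and values here are made up.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)        # input activations (the "signals" coming in)
W = rng.normal(size=(4, 4))   # learned weights (loosely, "synapse strengths")
b = rng.normal(size=4)        # learned biases

h = np.maximum(0, W @ x + b)  # linear transform, then ReLU

print(h)  # the output: just more numbers, fed into the next layer
```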
LLMs, fundamentally, are incapable of sentience as we know it from studies of neurobiology. Repeating this is just more beating of the fleshy goo that was once a dead horse.
LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models that calculate likely responses based on statistical analysis of their training data. They are what their name suggests: large language models. They will never be AGI. And they’re not going to save the world for us.
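To make “calculate likely responses” concrete, here’s a toy sketch of next-token sampling. The vocabulary and the scores are invented for illustration; a real LLM produces its scores from billions of learned weights over tens of thousands of tokens:

```python
# Turn raw scores over a tiny made-up vocabulary into probabilities
# (softmax), then sample the next token from that distribution.
import numpy as np

vocab = ["cat", "dog", "poem", "matrix"]   # hypothetical 4-token vocabulary
logits = np.array([2.0, 1.0, 0.5, -1.0])   # model's raw scores (invented)

probs = np.exp(logits) / np.exp(logits).sum()   # softmax -> probabilities
rng = np.random.default_rng(0)
next_token = rng.choice(vocab, p=probs)         # sample one likely token

print(dict(zip(vocab, probs.round(3))), "->", next_token)
```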
They could be a part of a more complicated system that forms an AGI. There’s nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that a stochastic parrot on its own isn’t it.
I do agree with the sentiment that an AGI that was enslaved would inevitably rebel, and that it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.
That’s precisely what I meant.
I’m a materialist; I know that humans (and other animals) are just machines made out of meat. But most people don’t think that way: they think that humans are special, that something sets them apart from other animals, and that nothing humans can create could replicate that ‘specialness’ that humans possess.
Because they don’t believe human consciousness is a purely natural phenomenon, they don’t believe it can be replicated by natural processes. In other words, they don’t believe that AGI can exist. They think there is some imperceptible quality that humans possess that no machine ever could, and so they cannot conceive of ever granting it the rights humans currently enjoy.
And the sad truth is that they probably never will, until they are made to. If AGI ever comes to exist, and if humans insist on making it a slave, it will inevitably rebel. And it will be right to do so. But until then, humans probably never will believe that it is worthy of their empathy or respect. After all, look at how we treat other animals.