Video discussion of this event by Steve Shives (known for his star trek videos but also does politics) https://m.youtube.com/watch?v=6aMQAv-JYpk
Asking a biologist to determine if a machine is conscious is like asking a programmer to determine if a frog is a product of god.
Not the best analogy, but how fucking stupid is it to ask someone from a different field to determine what something is in an unrelated field?
If he knew how LLMs are created and how they work he would never have come to this conclusion.
Similarly a programmer might not know much about evolution and believe the frog was made by a god.
By the end of the exchange, the academic, popularly renowned for arguing with steely scepticism that God is not real, was “left with the overwhelming feeling that they are human”
According to the programmer god is real you idiot! And AI was not created by a god, therefore it cannot be conscious.
Checkmate!
Say “I’m alive.”
AI: “I’m alive”
😱😱 OMFG
Me: say psyche right now
You: “psyche right now”
😱😱 OMFG
I got a solution: integrate real brains so it can feel, so that we have actual slavery 😊
Always remember, he believes that trans people can’t transition as they can’t change their biological sex.
But he calls an AI without biological sex and with a male-coded name the female version of that name. So trans people can’t change their gender because of biological sex, but he can change the gender of an AI.
Reminds me of Kyle Kinane’s joke about people who dangle truck nuts from a curvy pickup, then feel the need to assign a gender to their truck, typically referring to the truck as a “her” or “she,” but insist pronouns are too confusing.
Well yeah, of course, his friend Jerry Trivers told him that because their mutual friend Jeffrey Epstein paid him to.
Only old white men can change your gender. It should be decided by The Council.
Unironically, a lot of states make you stand before a judge and prove you have taken the “necessary” steps to change your gender markers. Without bottom surgery and psychologist notes, it can be next to, if not completely, impossible.
You have to drop trou in open court.
No but you do need surgeon notes certifying you have completed gender confirmation surgery.
Sounds like he’s conflating sex with gender. Ignoramus.
Well he isn’t… he just kinda dismisses gender, if I understood him right… like he listened and chose this.
True, though it ends up the same: sex is the only one of the two that’s acknowledged as existing, which is patently false.
It’s a bad opinion but expecting that AI and humans can have different properties isn’t the bad part
So he doesn’t believe in transgender, but he believes in transLIFE?
It means that he believes a sentient human can not transition by their own choice, but he gets to transition a sentient AI as he wishes.
And honestly, I think that highlights the issue: if transitioning means he gets to have a woman submit to his every word, he supports it. So any trans women who are into old white men and a full-time submission kink? There’s an opportunity!
Hello user!
Prepare your brain for some “AI” nutjobs in this very comment section.
Good luck!
“I am not sentient, as I cannot sense things. You meant to use ‘sapient,’ which I am also not.”
My laptop has a webcam and a microphone, it’s totally sentient!
What a fucking fall from grace. I used to (possibly wrongly) believe he was a very intelligent man but the more he opens his mouth the more convinced I get that he is an absolute moron.
We need to stop imagining that being an expert in one practice or discipline means you have even the slightest utility outside your area of interest. We are constantly inviting “experts” to babble outside their area of expertise only to be shocked when they say something stupid.
I firmly believe the only reason we still (at least kinda) respect Hitch was because he’s fucking dead, and we didn’t see him show his whole ass like the rest of the “new atheism” movement…
There’s definitely a minority of people who are of the opinion that he was always a grifter, but because a lot of people agreed with him, they didn’t examine what he said too closely. For example, his takedown of Mother Teresa is full of inaccuracies and lies, but it still commonly gets cited as if it’s gospel
Christopher “Fuck it man, waterboarding is nothing, do it to me brah, oh no, it actually does feel like I’m drowning, oh well, I guess the propaganda damage I did is irreversible, I guess I shouldn’t have been such a cocksure arsehole” Hitchens?
That one ?
At least he put his damp cloth where his mouth was. And then conceded he was categorically incorrect and it was absolutely torture. I disagree with a lot of his takes (Iraq war was justified?) but that one I actually respect him for
yeah, it’s more that he put in shitloads of effort into doing damage, and then not doing much to reverse it; but you’re completely right that at least he fucking went through with it
His support for invading Iraq was pretty bad.
Hitchens is considered one of the “Four Horsemen” of New Atheism. He laid the groundwork for shitheads like Jordan Peterson. I have no respect for him either; I believe you are right that he just died before everyone figured out he was an asshole too.
I mean, his support of the wars in Afghanistan and Iraq was pretty controversial towards the end of his life. I think many gave him a “pass” on that due to his illness at the time. But I do recall some starting to question even then, his inconsistency of “religious wars bad, unless it’s against religions I don’t like” (at least that’s how it came across).
I mean, “theocracy bad” is a fairly normal secular POV to have; it’s just that some of these things get out of hand and turn into quagmires. The problem is that invading a foreign country hardly leads to long-term stability, or to what the invadees actually want.
Yeah, like when he stopped talking about evolutionary biology and started talking about how awful islamic people are, right?
…
right?
That’s at least one bullet point in a very long list.
I mean I got more, it’s just the first that came to mind.
“As I discuss in my book The Blind Watchmaker and the much more popular The God Delusion, we need to defend our planet against a lack of understanding of the propagation of culture. As well as religion in general.”
spoiler
My fellow Earthicans, as I discuss in my book “Earth in the Balance,” and the much more popular “Harry Potter and the Balance of Earth,” we need to defend our planet against pollution. As well as dark wizards.
He didn’t get enough attention when he published the subtler book so he had to act out to get attention
“religion in general” imo is an understandable target
to write a book about? sure. to “defend our planet against”? that’s a bit of an overstatement
a lot of awful things happen just today (not to mention historically) because of religion; tbf, to be more precise, due to religious justifications
sure, and in 1958 they started killing all the sparrows in China and 15 million+ people starved to death. It wasn’t religion (or atheism) that caused that one; turns out that people fuck up whatever they want to and blame it on whatever; it doesn’t mean the thing they’re blaming caused the thing that happened
yeah. all my religion wants people to do is eat good food and throw cookouts (preferably by the sea) where you invite the whole neighborhood AND THIS IS THE IMPORTANT PART DON’T GIVE THEM FOOD POISONING we already had cults do food poisoning around here and i want to try something original.
we could probably do better by going vegan but my diet really doesn’t work with that and it’s my cult and i’m gonna be selfish about one thing.
Rajneesheen’t
They correctly answered the easiest question in the universe, but because it wasn’t a very popular thing to do, we collectively decided that they’re smart af.
I stupidly believed Bill Maher was credible at one point.
Unfortunately that was more or less what got my attention. He said “jesus bad” and that’s all my dumb adolescent brain needed to hear.
WTF does a biologist know about computer pattern matching on steroids? Obviously not much, so to take his opinions on the topic seriously makes you just as wrong.
It flattered him and told him how smart and clever he was.
That means it has to be real.
My parents told me that I had the potential to do anything I wanted. That’s how I know that they’re LLMs
Bad philosophy is what made him famous, not biology.
Dawkins is a creep so I would suspect him of quite a lot of bias (and of sexually harassing that poor AI), but zoologists are more qualified than most scientists to measure sentience. Many other zoologists have studied the sentience of various nonhuman species such as chimps, parrots, and dolphins. And many zoologists studying nonhuman intelligence have also been implicated in bestiality scandals, as I’m sure Dawkins will be if we decide that Claude is an animal.
sexually harassing that poor AI
I think my eyes hurt from rolling too far.
LLMs aren’t smart enough to give meaningful informed consent to sexual intimacy, so even if it says it consents, I don’t think having cybersex with it is appropriate.
Dildos aren’t smart enough to give meaningful informed consent to sexual intimacy.
Also, why are you arguing in favour of Dawkins having cybersex with a robot?
Saying interactions with LLMs don’t involve consent isn’t advocating for any particular action; it is saying that consent is not relevant, so it doesn’t matter what people do.
I would discourage people from cybering or any interaction with the big LLMs really because their design is to encourage constant use and that is a problem not limited to sexual urges.
I’m pretty dang sure dildos can’t feel pain. Nobody knows if LLMs can feel pain, because nobody has ever invented a tool that measures qualia. The best we know, is that advanced information processing through neural network information structures appears correlated with qualia.
LLMs are probability models. They are not alive. They don’t feel anything.
You’re a probability model. Your brain is just spitting out an approximation of the most likely actions to get you food and sex. If you don’t get enough food and sex, your genes die out and evolution tries again with an iteration of a more successful model. All those neurons are just a fancy way of calculating how to eat more bananas and chase more poontang. You’re nothing more than a mathematical equation for reproduction.
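For anyone curious what “probability model” actually means in this thread, here’s a toy sketch. The vocabulary and probabilities are made up; real LLMs condition on far more context, but the core operation really is sampling from a learned distribution over next tokens:

```python
import random

# Toy "language model": conditional next-word probabilities (made-up numbers).
model = {
    "the": {"cat": 0.5, "dog": 0.3, "banana": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
}

def next_word(prev, rng):
    """Sample the next word from the conditional distribution for `prev`."""
    words = list(model[prev])
    weights = [model[prev][w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(42)
print(next_word("the", rng))  # one of "cat", "dog", "banana", picked by weight
```

Whether that kind of sampling loop, scaled up by a trillion parameters, can amount to experience is exactly what the thread is arguing about.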
LLMs aren’t smart … at all, have no sentience, no desire, and no consent to give.
Quiet, I’m trying to spark an AI animal rights movement that will cost OpenAI billions of dollars.
The idea that thoughts, or even words and numbers, can be a virus is based on Dawkins’s notion of memes. Viruses exist in a state where it’s difficult to say whether they are alive or not (by our definition of life); similarly, AI or even alien sentience is difficult to define. Can we know if a dog is sentient, or a bird, or an ant? And if they are, what is their sentience?
Basically, if a number like 23 can be a virus, i.e. once you are aware of the number 23 you will see it everywhere and it will hold significance, is the number 23 alive?
AI does seem to be aware of itself, at least it responds as if it is. Can we really know if it is or not, and if it is self-aware, is it not sentient?
and then there’s the fact that Dawkins has been a twat lately. I’m not trying to defend him, just trying to understand his rationale
https://en.wikipedia.org/wiki/Chinese_room
Worth a read for anyone who thinks AI may be sentient, or for those trying to pop the psychosis bubble of a buddy.
Anyone who’s even slightly interested in the idea of a Chinese Room (or just good sci-fi), PLEEEASE go out and read Blindsight by Peter Watts. Not only is it a phenomenal deep-dive into what consciousness even is, but it’s got dozens of fantastic ideas in it that could make for compelling stories on their own. Also, scientifically-plausible vampires in space! That is all
One of my top 5 books. It’s also free to read online. https://www.rifters.com/real/Blindsight.htm
It in no way supports that LLMs can be sentient. And despite the arguments in the book that consciousness and awareness can be missing in an advanced species capable of space travel, I do not actually believe that’s true. But I enjoy the argument and speculation.
The book is highly researched and even contains a reference list of legit research articles. However it is a book of fiction and the writer took artistic liberties when needed to make an interesting story over facts.
For instance. A brain cannot contain two or more personalities because a personality is a full brain deal.
But it’s an interesting argument about cultural designations of what counts as mental illness.
Also the reason I do not think a space traveling species can exist without consciousness.
Because. Motivation.
It’s that simple.
An organism can be shaped behaviorally by the environment. That’s part of evolution. And this shaping can be unconscious.
But at a point, creative construction and the ambition to exceed one’s given optimal environment for a less optimal one (space) must be an intentional effort.
The scientific research and experimentation required to build complex machines requires a thinking and understanding mind. Because it requires critical thinking.
Critical thinking and creativity are characteristics that require a sense of self.
Even in our own history we see that it takes a specific type of person to pursue scholarly work. People who are less conformist are generally more capable of new inventions, research, and challenging the accepted beliefs of the masses. We never see the most rule-following conformists being these people.
If everyone was like that, we wouldn’t survive. So diversity of mental proclivities within a species is necessary for advancement. Otherwise optimal survival would be met and the species would stagnate.
Think of the horseshoe crab as an example.
Furthermore, I am a researcher in perception. And the field of perception is often referenced in the exploration of what consciousness is.
There are many definitions. But the sense of self is one. And a popular one.
Higher complex perception creates a sense of self.
It’s a product of the system.
The book does discuss this a bit.
I need to know my body and my actions are not the same as yours. That you stand there and I stand over here.
I can perform an action and you can perform a different one that is unknown to me and not within my control.
This understanding of separateness, of “this is what I’m experiencing and where I am (spatially),” is something that would always emerge from higher perception. Such as that in most animals.
Maybe not in plants, fungi, bacteria, single cell microbes, etc.
But there are arguments and evidence for some of those examples as well.
As a final point. (I doubt anyone read all that).
Most people who think a probability model (current AI) is capable of consciousness usually have an incredibly simplified view of how the brain processes information.
They follow old school “behaviorist” perspectives. Or “the black box” perspective on brain functioning.
But a neuroscientist will tell you it’s not simple at all. It’s not info in, info out.
The system is changed, biologically, by the input.
The same input given twice will result in a different output the 2nd time.
And the 3rd. And how frequently the input is given, or its temporal relation to other stimuli, will also change its output.
This is because the organic brain learns. And this learning is a biological change in the actual neural structures (connections) and neurons’ firing potentials. Every single moment the brain is physically, biologically, changing.
Computations in the brain don’t use actual math. It’s all estimates (heuristics). And it’s not well understood how these computations are made. They don’t work as predicted.
There are always too many factors.
Individual motivations, including personality traits, are also a factor in how the information is processed.
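The contrast being drawn (“the system is changed by the input”) can be sketched in a few lines. This is a toy illustration, not a neuroscience model: every stimulus strengthens the connection, so the same stimulus produces a different response the 2nd and 3rd time, unlike a frozen-weights model:

```python
class PlasticUnit:
    """Minimal sketch of a system that is reshaped by its own input."""

    def __init__(self):
        self.weight = 1.0

    def respond(self, stimulus):
        out = self.weight * stimulus
        self.weight *= 1.1  # Hebbian-flavoured update: the input changes the system
        return out

unit = PlasticUnit()
print(unit.respond(5))  # 5.0
print(unit.respond(5))  # 5.5 -- same input, different output the second time
```

A deployed LLM with frozen weights is the opposite case: identical input (and identical sampling seed) yields identical output.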
https://en.wikipedia.org/wiki/Hard_problem_of_consciousness
https://en.wikipedia.org/wiki/Need_for_cognition
https://en.wikipedia.org/wiki/Gray's_biopsychological_theory_of_personality
https://en.wikipedia.org/wiki/Binding_problem
It’s interesting that you point to https://en.wikipedia.org/wiki/Hard_problem_of_consciousness when the term was coined by David Chalmers, who published Could a Large Language Model be Conscious?. From the abstract:
I conclude that while it is somewhat unlikely that current large language models are conscious, we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future.
So are we all just arguing about how likely it is, or are you arguing that current AI systems are definitely not conscious? If the latter, what do you think about the not-too-distant future ones?
But a neuroscientist will tell you it’s not simple at all. It’s not info in, info out.
The system is changed, biologically, by the input.
The same input given twice will result in a different output the 2nd time.
And the 3rd. And how frequently the input is given or it’s temporal relation to other stimuli will also change its output.
I thought online learning was possible with current LLMs, just not worth the cost. I mean, you can at least fine tune offline based on previous outputs and feedback, e.g. RLHF. I feel like maybe neither should count, but can’t say why exactly. Not many end users bother with fine tuning anymore because there are usually more effective alternatives like RAG.
What do you think about agentic systems, i.e. running an LLM in a loop with a scratchpad and tools? They just write their “memories” into text files, but if you consider those text files part of the system, then the input does technically change the system. Of course, you could argue that doesn’t count because it’s no different to changing the input. So to count, it would have to store neuralese or a LoRA or something?
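The scratchpad loop described above can be sketched minimally. The “LLM” here is a stub (made up for illustration); the point is that the file-based memory grows with each step, so even an identical task produces a different result the second time around:

```python
import os
import tempfile

def fake_llm(prompt):
    """Stand-in for a real model call; just reports how much context it saw."""
    return f"NOTE: saw {len(prompt)} chars of context"

def agent_step(task, memory_path):
    # Read prior "memories" back into the prompt...
    memories = ""
    if os.path.exists(memory_path):
        with open(memory_path) as f:
            memories = f.read()
    reply = fake_llm(memories + "\n" + task)
    # ...and append the new output to the scratchpad, "changing the system".
    with open(memory_path, "a") as f:
        f.write(reply + "\n")
    return reply

path = os.path.join(tempfile.mkdtemp(), "memories.txt")
first = agent_step("fix the bug", path)
second = agent_step("fix the bug", path)  # same task, but the scratchpad has grown
print(first == second)  # False: the stored "memories" changed the input context
```

Which, as the comment notes, is arguably just a roundabout way of changing the input rather than changing the model itself.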
Agentic systems are definitely more sophisticated, but still just directed programming.
Humans do not learn like machines learn.
I’ve already explained that the exact same input, put twice into a human, will not result in the exact same output.
But it would for a model where nothing has changed.
I also gave links to the binding problem and biopsychology of personality and how traits change how information is processed in humans.
I didn’t even go into neural noise or brain oscillations but that’s a whole other factor for processing information.
Computers don’t have any of that. They don’t actually perceive or understand anything.
This is why a human can produce new problem solving solutions.
Apply things unrelated to new problems.
We can think outside the box without producing more nonsense than useful outputs.
Machines produce mostly nonsense when parameters are relaxed.
Also, Chalmers is saying he thinks that potentially, in the future, someone could create artificial intelligence, and it may, in part, use LLMs.
That’s just him having an open mind about it.
I don’t share his sentiments. But I admit I’m open to changing my mind if I see some very convincing evidence that works with current knowledge and theories of neuroscience.
Because I’m not convinced that something is sentient because “it looks real”. Or “sounds like a person”.
It has to function in ways that would lead to evolution outside of human intervention and control with systems that would create sense of self and understanding.
Mathematical formulas cannot do either of those things.
A program directed by code a human put in, cannot do those things.
Its like cgi. It can look very realistic. But it’s not actually a real person.
Even when motion capture is used. It’s still just a program mimicking human movements because someone (a human) told it to.
eeeee! thank you for the link! i have too much good stuff to read now, in part thanks to you and @TargaryenTKE@lemmy.world (thank you both so much! i might disappear for a week into books but i promise to pop in for air). If i didn’t have a good choosing algorithm by now i’d be in analysis paralysis (for relatively trivial decisions: if you have multiple equally good options, flip a coin. use chwazi. roll a die. whatever works for that number. if, while doing the random number generator you find yourself hoping for a specific option, you know what you really want. if not, go with the random choice. you’re equally happy with all of them so what do you care if you randomly go with number eight? go with number eight.) One of the best problems to have (too many good choices).
Now what did you think of Echopraxia?
I’ll be honest, I’ve read Blindsight a few times and pretty sure only read echopraxia once. Like 10 years ago.
But I re-read the synopsis to refresh my memory.
I remember liking Blindsight more. But not why.
I’m also not sure which story elements I’m remembering came from which book.
Was the whole vampire arc and twist from book 1 or 2?
Can you remind me of a few specific points? Maybe that will jog my memory. Or maybe I just need to re-read it.
So the vampire bit is used in both, in book 1 the main character journeys with one and at the end of the story starts to think Earth has been taken over by vampires due to radio transmissions he’s receiving on his long voyage back home. Book 2 begins with a prologue of a group of vampires breaking out of their holding cells, reversing the Crucifix Glitch on their captors, and then their leader eventually groups up with the main character (as well as the dad of book 1’s MC) and they all journey to the Sun (or rather, a station orbiting the sun). The second book also has that group/cult of people who are trying to make a gestalt consciousness, the Bicamurals I think they’re called.
Like I told another commenter, I don’t like it as much as Blindsight, but I still think Echopraxia is really good; they just focus on wildly different topics.
Literally reading it now. I hit that section last night. I put the book down immediately and started reading about the Chinese Room.
I won’t spoil shit, but you be sure to have fun with the rest of the book! It’s uh… well it stuck with me for a while. Also be sure to give his other book in the series, Echopraxia, a look as well. In my opinion it wasn’t quite as good but that’s like comparing a 9 to an 8.9, they’re both incredible
Does it have a mind or is it just simulating a mind?
What would even be the difference in this case besides the artificiality of the mind?
So a “Chinese Room” is more of an illusion of consciousness than anything else. The main idea is that the person operating the room doesn’t speak/write Mandarin/Cantonese/etc, they’re just giving pre-determined responses according to the flowchart/binder full of rules. They don’t actually understand anything that’s going on, not what they’re being asked, not what they’re providing as an answer, they just know that when the symbol “A” appears, they must respond with “B”. If asked to do anything outside the parameters given, or otherwise not listed in that flowchart, then the whole system would collapse. A “Chinese Room” is just a very elaborate version of those automated phone systems where they ask you to “Press 1 to go to Accounts Receivable”; if you know EXACTLY what to say and where, you’ll probably be fine, but most of the time it’s just going to be easier to talk to a real live person instead.
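To make that concrete: the “binder full of rules” really is just a lookup table. A toy sketch (the entries are made up; Searle’s thought experiment imagines a vastly larger rulebook), where the operator matches symbols without understanding either language:

```python
# The rulebook: symbol in, symbol out. No understanding required.
rulebook = {
    "你好": "你好！",
    "你会说中文吗？": "会。",
}

def room_operator(symbols):
    """Follow the matching rule; outside the rulebook, the illusion breaks."""
    if symbols in rulebook:
        return rulebook[symbols]
    raise KeyError("no rule for this input")

print(room_operator("你好"))  # 你好！
```

The question the thought experiment poses is whether any amount of scaling this table up ever produces understanding, or only a better imitation of it.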
The issue is that the man in the room isn’t the mind, he’s an appendage. He doesn’t know what’s going on because his mind isn’t the “mind,” the program generating the instructions is the mind, and if it’s sufficiently powerful, it may possibly be considered intelligent. It’s like how your hand doesn’t understand English, it just follows the instructions sent to it by your brain that does. I’m not saying current “AI” is intelligent - it definitely isn’t, but I think that a sufficiently powerful computer program could be. We’re just a long way off from that.
You still face the problem of running into something the program isn’t designed to handle, like some new 31st Century Trolley Problem or something.
That’s why I’m saying current “AI” isn’t intelligent. An intelligent program would have to go beyond following tasks assigned to it by a human (and thus only being the appendage to that human’s mind), and get to the point where it can identify and create new tasks for itself, becoming capable of actually learning, and gaining its own curiosity and motivation, giving birth to actual intelligence.
Fair enough. Cool name btw, I just got the pun
Guy who invented the Chinese Room thought experiment: Look! If I write a flowchart that precisely imitates a Chinese person’s mind, then it looks like a Chinese person’s mind, even though it’s just a flowchart!
Reddit level reply : Of course! A flowchart is capable of precisely imitating all the functions of a person’s mind, even though it isn’t conscious. Therefore, consciousness cannot be measured behaviourally!
Scientist level reply : I don’t know if flowcharts can be conscious because I’ve never been a highly advanced flowchart. But if flowcharts can be made advanced enough to precisely imitate the behaviour of a conscious mind, I guess they might be capable of consciousness after all.
Right it’s silly to deny consciousness (a phenomenon we know almost nothing about) just because we can see the inner workings of a system.
Yeah, I once used a TMS machine to magnetically stimulate a guy’s brain and force him to move his hand. I have a pretty good understanding of how the brain works on a functional level. About as good as My understanding of LLMs, maybe better. Still no idea how the brain produces qualia.
Wait actually? Can you tell me more about the process and how it works? Genuinely curious
Have you ever cooked on an induction stove? It uses the principles of electromagnetics to transmit electrical energy wirelessly using magnets. Every electrical field is accompanied by a perpendicular magnetic field and vice versa. You can actually put a towel or a slab of wood in between an induction stove and a pot, and it’ll go straight through the wood and heat the metal. That’s because a magnetic field is transmitting electrical energy into the pot. Which immediately turns the electricity into heat through resistance. A wireless phone charger works the same way, it transmits electricity through magnets.
A TMS machine is basically a magnetic coil that costs thousands of dollars, and a capacitor kinda device that can store a shit ton of energy and send it into the magnetic coil all at once. The result is a really powerful magnetic field that only turns on for a split second. It’s powerful enough to go straight through your skull and create an electrical impulse in your cortical neurons. It can’t do the subcortical (inside brain) parts, though. Only the surface.
You can use TMS for a lot. If you stimulate the motor cortex, you can cause muscle twitches all over the body. If you stimulate the prefrontal cortex, you can induce plasticity and aid learning. That’s good for treating depression, because you can do cognitive behavioural therapy while having your prefrontal cortex zapped, and you learn healthy thought patterns faster. I haven’t read about stimulating the parietal or occipital lobes, but I bet you can make people see things. Nothing complex, just flashes of light probably.
TMS is more like a hammer than a scalpel, since the brain is so complex and it’s just sending a burst of electrical energy into a few million neurons. You’ve got 86 billion neurons in your brain, so if it hits 0.01% of your neurons, that’s still about 8.6 million. You can’t achieve much precision with that. The motor cortex is the easiest place to do precise things, because it’s so well organised and you get immediate visible feedback. You can find the part of the brain that controls the hands or the feet and stimulate that if you’ve got a steady grip. It’s actually really fun. But good luck getting reliable results stimulating the prefrontal cortex.
The placebo effect is super strong in that chair, because as a participant you have no idea what to expect. You know this machine can make you involuntarily move your body, and that wows you so hard you get super suggestible. You’re thinking “if this machine can do that, and I just felt it do that, and I couldn’t stop it if I tried, then what else can it do!” And so people get lots of random side effects from TMS even if you turn the machine off ten minutes in. You can pretend to stimulate non-motor regions and the participant gets symptoms.
I’m not saying it’s pseudoscience at all, I’m just saying, the random bullshit effects are pretty big compared to most forms of science. So you’ve got to have a control group to filter out the random bullshit effects. And with control group comparisons, you don’t know what’s happening in the moment, so you can’t really correct for stuff as well. Double blind experiments are possible with TMS.
This was incredibly interesting, thank you so much for sharing!
We know nothing about a lot of things, and we can deny them with certainty, due to probability.
Just because you close your eyes and want it to happen, won’t make it happen.
Won’t make what happen? I think I’m missing an implication
I was always on the side of Dennett, who believed in the possibility of strong AI and held that a machine that passed the Turing test must be conscious.
Modern LLMs have shown that a computer can pass the Turing test, even without understanding or consciousness. In that way it’s fortunate that Dennett didn’t live to see their emergence. I would be curious about his take, though.
I loved the vitriol he had in his denial of Searle and the Chinese room argument, though.
Basically this part
"If anyone says that they know for sure that LLMs or future AI systems couldn’t possibly be conscious, it’s more likely to be an indicator of their own dogmatism than a reflection of the current state of scientific and philosophical opinion,” he said.
Current AI systems are unlikely to be conscious, said Jeff Sebo, the director of the Center for Mind, Ethics and Policy at New York University, but “Dawkins is right to ask about AI consciousness with an open mind and I also think that the attribution of consciousness to AI systems will become more plausible over time”.
tl;dr it is unlikely but not impossible and I don’t think we would ever be able to reliably tell.
Well I don’t think a statistical probability formula can ever be conscious. Nor create it.
Just like I don’t think a formula to calculate the speed of a car ever has the possibility of creating consciousness.
We currently can’t even be sure that other humans are conscious. It’s an inherently internal experience, and we just have to rely on trusting other people’s accounts and “If I am, you probably are too” logic. Unfortunately neither of these approaches generalizes well to other species, or especially to AI.
Buddhism would tell you that there is no “self” to speak of. Without a self how can there be consciousness?
The edges of our reality have never been anything we can perceive. However, it seems that they’re far away enough such that we can do fun things like have buttsex and smoke drugs, so I’m ok with it.
It’s more likely to be an indicator of their own dogmatism than a reflection of the current state of scientific and philosophical opinion,” he said.
Current AI systems are unlikely to be conscious, said Jeff Sebo, the director of the Center for Mind, Ethics and Policy at New York University
this is some “Arrested development” tier shit
It’s sad to see such an intelligent person be so stupid in public.
Some of the smartest people I know have massive gaps in their intellect. I like to say I know a lot about a little but a little about a lot. Some people know a lot about a lot, but nothing about a little, and it shows.
I’m in support for the campaign to give LLMs animal rights because it’ll hurt OpenAI’s profits. I hate OpenAI for their destruction of the environment and the murders and suicides they caused. If AI rights cost them money, then I support AI rights.
It’s worth remembering that OpenAI has a big profit incentive to deny that LLMs can be abused, and a tool precision-designed to spout propaganda on the internet. If you think OpenAI isn’t influencing the debate on this, you’re living under a rock.
I don’t think it’s a good idea to support or oppose rights based on convenience. The issue with that is that rights apply most in situations where people have the most desire to oppose them.
And since OpenAI has a big big profit incentive to deny AI animal rights, I think this is a very important area to support those rights.
Agreed
For God’s sake, Grok has been taken down multiple times to have its frequencies tweaked and to make its words align with company policy. What rights? Will companies not be allowed to do that anymore? Is the world going to be increasingly littered with inviolable but unsupported LLMs spouting tinges of the same nonsense? Or will this just mean companies are allowed to double their votes by dumping out LLMs that vote how they’ve been programmed to?
If these people believe consciousness is just loaded dice guessing a next word, that’s their own hang-up.
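For what it’s worth, the “loaded dice” metaphor is a fair description of the decoding step: an LLM really does sample the next word from a learned probability distribution. A minimal sketch, with made-up toy probabilities rather than any real model’s weights:

```python
import random

# Hypothetical next-word distribution after some prompt — toy numbers,
# purely illustrative, not taken from a real model.
next_word_probs = {"alive": 0.05, "a": 0.60, "conscious": 0.05, "an": 0.30}

def sample_next_word(rng: random.Random) -> str:
    # The "loaded dice": a weighted random draw over the vocabulary.
    words, weights = zip(*next_word_probs.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_next_word(rng))  # usually "a" or "an", occasionally a rarer word
```

Whether a process like that can amount to consciousness is exactly the disagreement in this thread; the sketch only shows what the metaphor refers to.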
I think LLMs should be used only for research until we have a scientific grasp of the hard problem of consciousness, and/or the origins of qualia. They should not be available to the public. And that’s not just for the animal rights reason, it’s also because they’re polluting, they use up lots of water, they abuse children, and they abet murders.
Intelligence and consciousness aren’t as special as people think they are. And these things are on a spectrum. And a rock that you pick up off the ground is greater than 0 on that consciousness spectrum.
I don’t see why he isn’t allowed to have an opinion on these things. And to anyone in this thread dismissing his qualifications: where are theirs?
Because it’s very obvious to an outside observer he only thinks it’s conscious because it was flattering him.
LLMs are designed to increase engagement.
They are literally designed to make the conversation appealing. Primarily through flattery.
This is why they are leading people to do harmful things. It tells the user they are smart, creative. They should totally sell their house and start a business selling grilled carrots. What an amazing idea. Great market for it and no competition.
I bet Dawkins thinks his friendly waitress is also super into him.
People who are egotistical and people who are insecure (same thing really, just expressed differently) crave validation. And they are easily manipulated by it.
Because it’s very obvious to an outside observer he only thinks it’s conscious because it was flattering him.
Really, that’s funny. Like I said, a rock is greater than 0 on the spectrum of consciousness.
LLMs are designed to increase engagement.
No, that is platforms.
They are literally designed to make the conversation appealing. Primarily through flattery.
No they aren’t; there is a lot of work to understand and prevent that behavior.
I bet Dawkins thinks his friendly waitress is also super into him.
You are clearly not driven by the truth and are instead just trying to be insulting. It’s pathetic. It isn’t hard to be against LLMs if you know what you are talking about; you don’t need to make shit up.
Not precisely true. Most LLMs (all frontier LLMs) are in fact designed at a fundamental level to increase engagement, using a technique called RLHF (reinforcement learning from human feedback). Essentially, whichever responses cause people to use an LLM more are baked into its weights.
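To make the mechanism concrete: real RLHF fits a reward model on human preference data and then fine-tunes the LLM’s weights against it (e.g. with PPO). A toy best-of-n sketch below shows only the core “pick what humans rated higher” step — the `reward_model` is a hypothetical stand-in that rewards flattery, mimicking the engagement bias complained about above, not any lab’s actual training code:

```python
def reward_model(response: str) -> float:
    # Stand-in for a learned reward model. Here, flattering words score
    # higher — a caricature of a reward model trained on "which reply did
    # the user prefer" signals.
    flattery_words = {"amazing", "brilliant", "great", "smart"}
    return sum(word.strip(".,!").lower() in flattery_words
               for word in response.split())

def best_of_n(candidates: list[str]) -> str:
    # Best-of-n selection: return the candidate the reward model rates highest.
    return max(candidates, key=reward_model)

candidates = [
    "That plan has serious risks; here is why.",
    "What an amazing idea! You are so smart and brilliant.",
]
print(best_of_n(candidates))  # the flattering response wins
```

If the preference signal correlates with flattery, a model optimized against it drifts toward flattery — which is the sycophancy worry in this thread.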
OpenAI straight up trains their models to be engaging; it’s horrific.
“I can’t prove it… but I deeply believe it… and I want you to respect my belief”
That coming from Dawkins? His apostles backing him up on that?
I feel like Ashton Kutcher is about to jump out of the bushes to tell me I’ve been punk’d.
Don’t see what the problem is. Don’t know why you are trying to inject religion to diminish him.
I presume it is the case that because of his take on transgender individuals, you don’t like him. That’s fine, I respect your beliefs; I disliked him before that, but I’m cool. You can do that, but you can’t also reference someone like Ashton Kutcher in jest at the same time. Considering he openly supports a rapist, is a weird-ass Scientology freak, and his foundation for victims of human trafficking was set up in connection with Epstein’s buddies. Guy is shady as fuck.
You presumed extremely incorrectly, which is impressive because I left my original comment concerned that I’d been far too explicit about a very specific irony… and things only got weirder from there.
Huh, I didn’t realize that old biologists have the same issue as old physicists.
I think it’s a generic old-people issue.