For what crime?
Describing the desire to commit a crime? Not illegal
Writing fantasy about a world beyond the restrictions of current day moral laws? Not illegal
Jokingly telling ai you did a crime to see how it would react? Not illegal.
Is this the same guy that said he wanted us all to have personal, unrestricted models at some point?
They don’t care. They want people to be in line. Remember that tech bastard who said he wants to make AI pervasive and to force people to be on their best behavior? You think he’s talking about jaywalking and shoplifting? He is talking about political or economic protests and advocacy. They don’t give a fuck if murderers and armed robbers get away with their shit.
Look at what is happening in the UK. There are hundreds of arrests under their new law and basically all of them are activists or people sharing information about the genocide in Gaza. They then took fingerprints and DNA samples from them to enter into a database, and locked them up in jail for days.
Count Dankula, a fascist YouTuber in the UK, only got a fine when he taught his dog to do a Nazi salute and posted the video online. He never had to give fingerprints or DNA and never spent any time in a cell.
Let that sink in.
I asked ChatGPT some spicy questions a long time ago (on the order of months), mostly about privacy and maintaining privacy online. I even asked it how to keep private from itself. I deleted that account many months ago, and it was not immediately linked to my ‘real’ identity. I’ve also asked it some other spicy questions about stuff I won’t reveal here. Again, months ago, and by its own admission, if something is reported, it will be acted upon almost immediately. If something is super illegal (as in, “I want to kill so-and-so individual”), it not only flags it for immediate review but also reports it to the relevant law enforcement right away.
Chatgpt told me this.
The only way to use ChatGPT, if you must, is with A: a throwaway email (must be reusable), such as a darknet email service or something that doesn’t link it to you by name (such as Proton or Tutamail), and B: over the Tor network.
I mean, I see this as a consequence of all these articles about people using ChatGPT for harmful things
Yeah, they’re damned if they do, damned if they don’t. Both they and Google are getting sued over kids who committed suicide, whose parents should have been monitoring them and getting them mental health treatment. If the courts decide that LLM companies bear legal and financial responsibility for user actions, then of course they’re going to do this.
The only privacy is local. And actually, given Microsoft, local and Linux-based.
This scares the shit out of me. A hundred years ago we saw the rise of fascism. We saw freedom of expression being suppressed. But we had one thing going for us, which is the weakness of every dictatorship. The snitches are not enough and they can’t be everywhere. You never know when they can be listening and chances are most times they aren’t.
Now we are seeing the birth of a new fascism, where AI can monitor ALL of us, ALL THE TIME. Not just our prompts. Everything. Everybody has experienced talking about something with a friend and receiving ads about that thing a few minutes later, despite never having searched for it. Now imagine being monitored all the time for any kind of subversive opinion. You won’t have a window to fight. The moment you give the smallest hint of dissent, you are efficiently removed from society.
And forget just ditching smartphones. More and more of our services are tied to them. Very soon you won’t be able to function in society without one.
AI won’t rule us. AI will be the ultimate tool to help other humans rule us, and fighting back will be almost impossible. I feel this isn’t being talked about enough, nor how imminent it is.
It’s almost like the privacy alarmists, who have been screaming for decades, were on to something.
Some people saw the beginning of Minority Report and thought, ‘that sounds like a good idea.’
We used to be in a world where it was unfeasible to watch everyone, and you could get away with small ‘crimes’ like complaining about the president online, because it was impossible to get a warrant for surveillance without any evidence. Now we have systems like Flock cameras, ChatGPT and others that generate alerts to law enforcement on their own, bypassing the need for a warrant in the first place. And more and more frequently, you both can’t opt out and are unable to avoid them.
For now, the crime might be driving a car with a license plate flagged as stolen (or one where OCR mistakes a number), but all it takes is a tiny nudge further towards fascism before you can be auto-SWATted because the sentiment of your draft sms is determined to be negative towards the fuhrer.
Even now, I’m sacrificing myself to warn people. This message is on the internet and federated to multiple instances. With enough resources, there’s no way I couldn’t be identified by it. Once it’s too late, I’ll already be on the list of people to remove.
I remember watching a video from the early 2000s that had a nightmare privacy scenario of someone trying to order a pizza. The person said nothing other than what he wanted, yet the people on the phone already knew his address, his job history and his health records, and said that, due to his latest health checkup, he’d have to pay extra if he wanted his double-meat pizza; otherwise he’d have to go with the health-recommended veggie pizza.
The video was made at a time when smartphones were a rarity and most people ordered food the old-fashioned way: they called the place by phone, told them what they wanted and where they were, and paid in cash when it arrived, since portable credit card readers were very uncommon.
Now we ARE in that situation, except for syncing medical records so that any company can use them. But we are going to get there soon enough, and it will be ‘for the children, you pedo! Also, if you have nothing to hide then you have nothing to fear!’ Bullshit that for some reason everyone believes and will never question… even if they get caught up in the system themselves for some bullshit reason, they will never, ever connect the dots.
do you mean to tell me that a service provider is cooperating with authorities? holy garbage crab
Sam Altman belongs in prison. His machine encouraged and guided a child to kill themselves. His machine actively stopped that child seeking outside help. Sam Altman belongs in prison. Sam Altman does not need another $20,000,000,000,000. He needs to go through the legal system and be sentenced and sent to prison because his machine pushed a child to suicide.
He’s pretty untouchable.
Every government thinks AI is the next gold/oil rush and whoever gets to be the “AI country” will become excruciatingly rich.
That’s why they’re being given IP exemptions and all sorts of legal loopholes are being attempted/set up for them.
Yeah… whatever this is doesn’t care if you’re seeking to kill yourself, but does care if you ask something that isn’t state sanctioned.
That is one of the fundamental flaws of machine learning like this, the way they are trained means they end up always trying to agree with the user, because not doing so is taken as being a “wrong” answer. That is why they hallucinate answers too - because “I don’t know” is not an acceptable answer, but generating something plausible that the user takes as truth works.
You then have to manually try to rein them in and prevent them from talking about things you don’t want them to, but they are trivially easy to fool. IIRC, in one of these suicide cases the LLM did refuse to talk about suicide, until the user told it it was all just for a fictional story. And you can’t really “fix” that without completely banning it from talking about those things on every single occasion, because someone will find a way around it eventually.

And yeah, they don’t care, because they are essentially just predictive text algorithms turned up to 11. Chatbots like ChatGPT and other LLMs are an excellent application of both meanings of the word “Artificial Intelligence”: they emulate human intelligence by faking being intelligent, when in reality they are not.
You must have used ChatGPT a lot to say this, because that’s completely false. There are safeguards for both things
And that is because they get their vast, innumerable sums of digital money from world governments! Human people are allowing an advertising and surveillance tool to Wormtongue its way into their heads and their lives because it breathlessly encourages and agrees with everything they think.
I just don’t believe that our perceptions and ability to handle enthusiastic sycophantic agreement is evolved enough yet to combat something like this. I could see it being intoxicating to anyone for everything they say to be agreed with, confirmed, and called genius. I don’t necessarily blame the people falling for it (though I do think adults who fall for it are a bit sad and need to grow up a bit), but it’s definitely going to be massively convenient for governments to have their citizens just voice everything they’re thinking.
Sort of like Minority Report but everybody says their own future crimes outright to a little robot butler instead.
Uses a tool the bad way despite it being public knowledge that it’s bad for mental health
Was predisposed to mental health problems
Died, partly because they talked to a chatbot
“It’s the chatbot creator’s fault”, despite the chatbot never being made to cause those problems, and efforts being made to fix them
…
Yea nah, it’s just anti-ai people doing their thing again and not being objective.
Get a better fight, such as hating on pharmaceutical companies pushing the use of extremely addictive substances for profit, despite knowing the immense risk they pose to consumers, and financing false ads to make them look safe.
If Sam Altman belongs in prison, it would either be:
- Because he’s destroying the planet (ecologically)
- Because he stole lots of content to train his models
There’s a reason dangerous tools are required to have guards and safety features. It’s not enough that it’s known to be dangerous, that doesn’t stop accidents.
deleted by creator
If you misuse some things at this point, then it’s not the thing’s fault
Some things are - on purpose - made easy to misuse and - by design - accessible to people, who are likely to misuse them. All this money, this supposedly cutting edge technology, and reporting to the police, but they aren’t able to tell when a child is at risk and report it as well?
Smells like bullshit to me. More like they don’t care. I’m not so sure children should even be allowed to use chatbots in the first place. Or only allowed to use versions specifically trained for interactions with children. But of course - banning children from accessing youtube and wikipedia is a much more pressing concern.
They definitely prefer to spend their money on development, rather than adding safeguards
I don’t believe people misusing ChatGPT helps them in any way, it’s just that adding protections has a cost
but they aren’t able to tell when a child is at risk and report it as well?
Maybe police actually sort and filter reports manually, but don’t want to bother with mental health things? You know how the USA works. I don’t believe OpenAI will go too far, they’ll just randomly report.
Might even be reported for all I know; sometimes I just like to see the reaction of LLMs when I say I’ll commit horrible stuff like school shootings or terrorism. The NSA will just feed it into their mass-spying algorithm to check the most important profiles, and that will be it.
The war on drugs is so much more important than mental health detection, y’know. It sells more.
Pretty much every corporately owned service on the Internet actively spies on you for the police.
An important thing to understand as authoritarians take control of governments and start using this comprehensive spying apparatus to target political opponents.
Learn to use your computer. Use open-source tools and software, invest in your own hardware, and host your own services. It doesn’t require years of learning or study; you can often get by with a video or two.
My Jellyfin server doesn’t call the police. My local language models don’t store everything I’ve ever written. Nobody is scanning my NextCloud server or mining my Signal/Matrix/Jami contacts to determine my social graph.
All of this is running on cheap leftover hardware (with some new hard drives) and I save over $100/mo on the equivalent services. And way more if you consider access to every streaming service with exclusive content.
Windows is spying on you, Meta is spying on you, Google is spying on you, Amazon is spying on you, OpenAI is spying on you.
They do this because they make it slightly easier to use software and so people give up every bit of privacy and autonomy for their entire lives just to avoid reading a wiki or learning a technical skill.
I don’t think that that is a good deal.
Local models. Can’t be surveilled if your AI isn’t on the internet.
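For anyone curious how little it takes: once a local model is running, talking to it is a few lines of code. A minimal Python sketch, assuming an Ollama server on its default port (localhost:11434) and some model already pulled; the model name “llama3.2” below is just a placeholder for whatever you happen to run:

```python
import json
import urllib.request

# Ollama's default local endpoint; requests never leave your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """JSON body for a single, non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """POST the prompt to the local server and return the model's reply."""
    body = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With the server running, `ask_local("llama3.2", "hello")` returns the completion; pull the network cable and it still works, which is rather the point.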
Exactly.
This is going to be the next Google-searches thing, isn’t it. People being ignorant of, or forgetting, that corporations are seeing everything they say or do. And then being all shocked when they get exploited for profit or reported to the authorities for doing shady things.
Rinse and repeat.
“When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts,” the blog post notes. “If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement.”
See? Even the people who make AI don’t trust it with important decisions. And the “trained” humans don’t even see it if the AI doesn’t flag it first. This is just a microcosm of why AI is always the weakest link in any workflow.
This is exactly the use-case for an LLM and even OpenAI can’t make it work.
This is exactly the use-case for an LLM
I don’t think it is. An LLM is a language-generating tool, not a language-understanding one.
That is actually incorrect. It is also a language understanding tool. You don’t have an LLM without NLP. NLP includes processing and understanding natural language.
But it doesn’t understand - at least not in the sense humans do. When you give it a prompt, it breaks it into tokens, matches those against its training data, and generates the most statistically likely continuation. It doesn’t “know” what it’s saying, it’s just producing the next most probable output. That’s why it often fails at simple tasks like counting letters in a word - it isn’t actually reading and analyzing the word, just predicting text. In that sense it’s simulating understanding, not possessing it.
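The “most statistically likely continuation” point can be made concrete with a toy bigram model. This is a drastic simplification of a real LLM (which runs a neural network over subword tokens rather than a raw word-count table), but the core move, emitting the most probable next token seen in training, is the same:

```python
from collections import Counter, defaultdict

# A tiny "training corpus"; a real model sees trillions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent continuation observed in training."""
    return following[word].most_common(1)[0][0]

# "the" was followed by "cat" twice and "mat" once, so:
print(most_likely_next("the"))  # -> cat
```

Nothing here “knows” what a cat is; it is pure frequency. An LLM replaces the count table with a learned network, which is why it generalizes far better, but the output is still a probability-ranked next token.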
You’re entering a more philosophical debate than a technical one, because for this point to make any sense, you’d have to define what “understanding” language means for a human in a level as low as what you’re describing for an LLM.
Can you affirm that what a human brain does to understand language is so different to what an LLM does?
I’m not saying an LLM is smart, but saying that it doesn’t understand, when having computers “understand” natural language is the core of NLP, is meh.
Can you affirm that what a human brain does to understand language is so different to what an LLM does?
Well, yeah. Humans have these pesky things like concepts, consciousness and thinking above the language level. So pesky (sarcasm)
That doesn’t answer the question you quoted.
Does it not? Show me how
You’re right - in the NLP field, LLMs are described as doing “language understanding,” and that’s fine as long as we’re clear what that means. They process natural language input and can generate coherent output, which in a technical sense is a kind of understanding.
But that shouldn’t be confused with human-like understanding. LLMs simulate it statistically, without any grounding in meaning, concepts or reference to the world. That’s why earlier GPT models could produce paragraphs of flawless grammar that, once you read closely, were complete nonsense. They looked like understanding, but nothing underneath was actually tied to reality.
So I’d say both are true: LLMs “understand” in the NLP sense, but it’s not the same thing as human understanding. Mixing those two senses of the word is where people start talking past each other.
Of course the “understanding” of an LLM is limited. Because the entire technology is new, and it’s far from being anywhere close to being able to understand to the level of a human.
But I disagree with your understanding of how an LLM works. At its lowest level, it’s a bunch of connected artificial neurons, not that different from a human brain. Now please don’t read this as me saying it’s as good as a human brain. It’s definitely not, but its inner workings are not so far off. As a matter of fact, there is active effort to make artificial neurons behave as closely as possible to human neurons.
If it were just statistics, it wouldn’t be so difficult to look at the trained model and identify what does what. But just like with the human brain, it is incredibly difficult to understand that. We just have a general idea.
So it does understand, to a limited extent. Just like a human, it won’t understand what it hasn’t been exposed to. And unlike a human, it is exposed to a very limited set of data.
You’re putting the difference between a human’s “understanding” and an LLM’s “understanding” in the meaning of the word “understanding”, which is just a shortcut to say that they can’t be compared. The actual difference is in the scope of understanding.
A lot of the efforts in the AI fields gravitate around imitating a human brain. Which makes sense, as it is the only thing we know that is capable of doing what we want an AI to do. LLMs are no different, but their scope is limited.
No they’re not; they’re talking purely at a technical level, and you’re trying to apply mysticism to it.
They are talking at a technical level only on one side of the comparison. It makes the entire discussion pointless. If you’re going to compare the understanding of a neural network and the understanding of a human brain, you have to go into depth on both sides.
Mysticism? Lmao. Where? Do you know what the word means?
Of course they are. They are a literal data farm. People need to stop using it.
Funny, now you’ll have the cops arresting you for prompts like “how to survive being homeless?”, rather than social services when you prompt “how to avoid being homeless?”.
And will authorities be called when someone prompts “how to shoot wild animals?” when asking about wildlife photography? 😆
“Hey ChatGPT, how many human corpses can 12 pigs who haven’t been fed in a week process”?
They don’t eat teeth. Just saying.
You want to keep those for a necklace.
Everyone knows ears make a better necklace.
Hence the phrase: as toothless as a pig
you can always grind it, add it to paint and paint your house white
Eww, the teeth of my victims plastered all around my house.
Paint exterior not interior.
How do I kill the kill switch?
Well, if idiots share their crime plans with a corpo, what do they expect?
However, police won’t do anything.
But even if they do, prosecutors won’t.
Just look at that incident in Nevada with the Israeli pedophile.
Cops bust him, federal prosecutor refused to charge him…
Laws are enforced selectively
Are you seriously comparing a corrupt Israeli politician to an average joe? Israel can get away with murdering Americans, and they would apologize that they didn’t die earlier.
USS Liberty Remembers
However, police won’t do anything.
This is the punchline to the joke of mass surveillance. You can have people doing crimes in clear view of the police and they just stand around. The police aren’t for deterring crime, they’re a jobs program and a human shield against harm to private property.
He had diplomatic immunity. They refused to prosecute because it is an international incident that would require dragging the Israelis to the ICC just to get permission to prosecute him in their jurisdiction. That’s always a decades-long approach in normal times, and with this administration of pedos who are beholden to Mossad, there’s a 0% chance of it happening. So it’s often better to NOT prosecute and wait it out until more friendly times than it is to swiftly lose a trial and then be prevented from seeking justice by double jeopardy.
It’s part of why the Kyle Rittenhouse trial was such a shitshow. The prosecution team threw the case intentionally and made him immune to justice.
You’re making factually incorrect statements. Please do some basic diligence.
But yeah, the regime are pedos, no doubt about it.
If he had diplomatic immunity, he didn’t claim it as far as I’ve seen. He was given a date for appearance before the judge so they could have held him until then with no problem. I agree they didn’t want to make an incident of it but it wouldn’t have involved the ICC.
They haven’t refused to charge him. He has a hearing scheduled on September 3rd.
That’s state charges… This was a fed op and feds always charge under this fact pattern.
The fact that feds didn’t charge means that somebody at DoJ decided against the policy.
Did the feds charge the other people caught in the same sting? I’m not seeing any articles about the fed vs state charges.
I am not sure.
I tried looking more into it, but now I can’t even find the articles I saw last week.
Recent reports did say the Israeli skipped his court hearing yesterday, though.
Yo, I was just joking about making a gallon of PCP
RIP Trevor. Still can’t believe he died trying to suck his own dick
I was just doing a citizen’s audit, your honor.
Lazy authors of crime themed novels are sweating so heavily right now.
Framework Desktop based on an AI Max 395+ processor with 128GB unified memory running a model locally, then hit /r/LocalLLama or !localllama@sh.itjust.works and ask which LLM models work well with corpse disposal techniques and are trained on long-form literature.
EDIT: Fixed link. Thanks, BB84.
you missed one L. it’s !localllama@sh.itjust.works
Thank you! Corrected!
You want an ablated model for that