GUYS, please, we just need to give them one more trillion of moneys and an ocean of fresh water and we will have an AGI next month!!!
Just imagine AI doing all the work for you, while you live a life of leisure as a homeless person!
Yeah, well this isn’t a democracy where people have a say in what happens in our society. Our feudal elite decides what will happen, so stop complaining.
It’s not AI. It’s LLMs, which don’t actually think in any meaningful way. They just repeat what they have ingested and whatever was mathematically most likely.
That’s why I’m a pessimist about LLMs doing anything truly revolutionary. They’re another productivity tool for solving problems that shouldn’t exist in the first place, and middle management loves them for the same fucking reason.
yup. a roided-up eliza isn’t going to synthesize anything new. they can do some tasks, but it’s most certainly not artificial intelligence. and chaining a bunch of elizas together isn’t going to make them smarter (claw etc.), much less make them reliable and useful.
Don’t you mean the AI peddling is moving too fast, rather than the advancement?
AI isn’t being used to better society or to improve lives. It’s being used to drain people and make the Epstein class more undeserved money.
AI isn’t the problem, it is just an excuse to abuse and gaslight people. If AI didn’t exist, some other card would be played.
Instead of destroying the looms, we should take them over and make our own products. AI can be incredibly useful and might allow cottage industries and smaller communities to become strong enough to contest the powers above us. The big constraint is just the affordability of local hardware and the development of sufficiently powerful models.
Things are moving quickly, especially in the local AI space. Two years ago, fitting a 70B on my hardware was difficult; it was limited to 4k of context, could take an hour to produce output, really sucked at calculating numbers, and was censored. Now a 122B can be uncensored, allows for 256k of context, takes less than two minutes to output a lengthy response, and is much smarter.
What I am saying is that we shouldn’t reject the power of AI. We should use it ourselves, and become the equals of the elite. If we foolishly abandon power, the wealthy will just continue bullying us.
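If anyone’s curious how low the barrier has gotten, here’s a minimal sketch of running a quantized local model with llama-cpp-python; the file name, context size, and GPU layer count are just placeholders for whatever your hardware can actually fit.

```python
# Minimal sketch: local inference with llama-cpp-python and a GGUF model.
# The model path, n_ctx, and n_gpu_layers values are illustrative only;
# tune them to your own hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-local-model-q4.gguf",  # any quantized GGUF you have
    n_ctx=32768,       # context window; newer models advertise far more
    n_gpu_layers=-1,   # offload every layer to the GPU if it fits
)

out = llm(
    "Q: Why does local inference matter?\nA:",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

Everything there runs on your own machine, which is the whole point.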
we shouldn’t reject the power of AI. We should use it ourselves, and become the equals of the elite.
Sorry but you’re delusional. Your chatbot girlfriend is not going to turn you or anyone else into the equal of Epstein or Elon Musk.
And I don’t want to be “the equal” of Nazi cockroach pedophiles. That’s a downgrade for me.
I agree, and would add for others reading here that the Luddites’ issue was not the looms they destroyed, but the out-of-control inequality that the government was not addressing. We need to stop blaming AI as a society for job loss and instead get governments to help with the transition, which so far they have largely been inactive on.
Job losses are a consequence of AI spending, not productivity increases. This is an economic issue. Everyone with a little money is trying to ride the wave, regardless of the consequences, while big corporations are positioning themselves for long term dominance.
What do you mean by help transition society? Help society transition to what exactly?
It is outrageous what is happening with AI right now. I work for a large company that does contracts with the US government, specifically for the VA. Not only did they just lay off a bunch of people, but they just announced that we are being required to use AI in every step of our workflow, and they have decided that AI is so great that they now have people who have never been coders a day in their life doing development work. The guy whose job it was to create and manage schedules is now being required to use AI to write code and ship it. These AIs are wrong so, so, so much; it’s crazy that this is the direction we are going in. If you thought things were bad already, it’s about to get way worse.
I am so deeply sorry to all the vets who will be struggling to get the healthcare that they need because of this. We don’t want to do this either, but it’s clear as day they will fire us and replace us with any warm body, regardless of whether that person has actual experience or not. I am looking to leave, but the market is complete dog shit and it’s been a struggle to get any kind of response to applications.
Is there a good use case yet?
Hmmm…
How about moderation of lemmy users based on suspected political affiliation according to an LLM?
link, please?
It’s hypothetical, hopefully.
I also did not save the links.
Do they use an LLM to moderate? Reddit does, and it doesn’t have context, and that’s how I got a permanent ban lol
One party claimed as much, but the (AI-supporting) accused denied it.
reddit does use it. i suspect they are using google’s version and/or OpenAI’s. that’s why there have been so many AI-generated messages after you get banned. reddit (admin/spez) realized this is alerting people to its AI usage, so they shadowban instead now.
the AI response from a sitewide ban usually goes like this: “Your account has been banned due to violation(s), please refer to the TOS.” also, it doesn’t tell you what the ban is for, so they keep it nebulous enough that you can’t appeal it.
GPT-5.5 high is not half bad at coding.
It’s not bad, it’s not good. It requires a lot of hand holding and context.
I just tried it to see if it could implement a ping scanner in Python. It could, but only if it blocked the GUI while running. That kind of thing is an intermediate-level school assignment. It’s not even half bad; it’s maybe 15% not bad.
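For reference, here’s roughly the shape of the non-blocking version I was expecting, as a sketch: the pings run on a worker thread and results come back to the Tk main loop through a queue, so the window never freezes. The address range and ping flags are illustrative (the -c/-W flags are the Linux ones).

```python
# Sketch of a ping scanner whose GUI stays responsive: the scan runs on a
# worker thread and posts results to the Tk main loop through a queue.
import queue
import subprocess
import threading
import tkinter as tk

def ping(host: str) -> bool:
    """Return True if a single ping to `host` succeeds (Linux-style flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def scan(hosts: list[str], results: queue.Queue) -> None:
    """Worker-thread body: ping each host and push the outcome onto the queue."""
    for host in hosts:
        results.put((host, ping(host)))
    results.put(None)  # sentinel: the scan is finished

def main() -> None:
    root = tk.Tk()
    root.title("Ping scanner")
    output = tk.Listbox(root, width=40)
    output.pack()
    results: queue.Queue = queue.Queue()

    def poll() -> None:
        # Drain the queue on the GUI thread; keep polling until the sentinel.
        try:
            while True:
                item = results.get_nowait()
                if item is None:
                    return  # done; stop rescheduling
                host, alive = item
                output.insert(tk.END, f"{host}: {'up' if alive else 'down'}")
        except queue.Empty:
            pass
        root.after(100, poll)

    def start() -> None:
        hosts = [f"192.168.1.{i}" for i in range(1, 11)]  # illustrative range
        threading.Thread(target=scan, args=(hosts, results), daemon=True).start()
        poll()

    tk.Button(root, text="Scan", command=start).pack()
    root.mainloop()

if __name__ == "__main__":
    main()
```

That’s still squarely intermediate-homework territory, which is the point.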
Development is moving along just fine IMO. It’s the application of AI that’s out of control.
Besides medical science, I see no use for AI. People make excuses about it being “more accessible” for disabled people, but you could replicate those features without AI.
It’s the equivalent of using an 80 lb sledgehammer on a penny nail: swinging wildly and missing 99% of the time, hitting your own shins, but 1% of the time it works, so it’s definitely good and the right thing to do!
I don’t understand the question and I’m guessing people in the survey may not have either. Moving too fast as in using too many physical resources without first focusing on optimization or “OMG the robots are coming for my job!”? These are very different views on technology that could give the same answer.
Either opinion is valid for “too fast”.
Exactly, that’s what makes it an uninformative question.
It’s all opinion questions. They’re trying to gather opinions and feelings, not measure quantitative data about the respondents themselves.
It’s just a survey-writing thing. A good survey can focus on these subjective issues but produce potentially actionable results. This question is akin to asking, “Do you think food is too spicy?”
I’ve got some pessimistic views as to long-term AI concerns — I’m not sure that aligning advanced AI goals with human goals in the long run is a viable problem to solve. We may not be able to achieve Friendly AI. I could believe that.
But I certainly don’t think that AI development is “moving too fast”. There isn’t really anything to gain in slowing down development. I remember Elon Musk proposing a six-month moratorium on development; that doesn’t make any sense, and would only be something that you’d want to do if you had an immediate milestone that you believed carried major risk. In general, either AI is something that you should ban globally because it’s too much of an existential risk for humanity, and halt all development and enforce that halt, or you’d like to achieve it as soon as possible. We are not at a point where there is a consensus that that level of unacceptable risk exists and a global commitment to enforcing such a prohibition.
I can believe that there might be an excess of infrastructure development in particular, and that we might not have the research side moving as quickly as needed to support it. We might be misallocating by buying a lot of specific chips without establishing that those chips are going to provide a worthwhile return. But in terms of the technology advancing… no, I can’t agree there.
And…let me make it even more concrete. I’d say that there are basically two scenarios:
1. We establish that AI (for some definition of AI) is simply too dangerous for humanity to have. In that case, the right path is to ban AI globally. That means that nobody gets it. Some coalition of countries is going to have to be willing to attack anyone who tries developing it. In that case, what we have is effectively an arms control restriction baked into customary international law. It is not optional to participate. And, for all the future of humanity, we need to be willing to enforce that. It means that we need a viable verification protocol to ensure that nobody is developing it, as is normally the case for arms treaties. And everyone has to submit to that verification protocol.
2. We don’t. In that case, we want to develop AI sooner rather than later.
I am certainly not willing to say that #2 is the “right” scenario and #1 is the “wrong” one. But if we decide on #1, that comes with a lot of things that we need to be doing as a species. It’s not just going to be the pre-computer-era status quo persisting, where our limited state of technology was what maintained the situation.
EDIT: I’d also add that, just as I’m not sure that Friendly AI is a solvable problem, I’m also not sure that it’s really viable to have a verification protocol where we can prevent development of AI. Past arms control treaties where I think verification was likely much easier (it’s hard to hide development of major warships under the Washington Naval Treaty, for example, yet there were still parties evading restrictions) were not always successful. #1 comes with its own set of hard problems too. Are parallel compute processors legal? What about their development and production? Under what restrictions are they used? Is it possible to achieve advanced AI using CPUs (my guess is that it likely is)? If so, what new restrictions will need to be placed on use of and access to CPUs? How will we identify entities building production facilities for CPUs and GPUs? Will we need to track all existing CPUs and GPUs, to try to identify entities who might be stockpiling them? How will we monitor what the great stores of those out there now are being used for?
If we go with #1, that also entails a different world from the one that we live in today.
If most are against something, how can twice as many feel something else? Isn’t most more than half?
Twice as many are AI pessimists as AI optimists
Let’s say there are 10 AI optimists, which means there are 20 AI pessimists. There being more pessimists also tracks with the majority thinking it’s moving too fast.
It’s simple, Sergey. I think you got it the other way around.
Feeling it’s too fast and optimist/pessimist aren’t mutually exclusive.
While not mutually exclusive, you are limited by the total population of respondents. If 60% of people say it’s too fast, then wouldn’t it require 120% of that same population to double it?
(Most Americans say AI development is moving too fast) and (twice as many are AI pessimists as AI optimists)
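To make the arithmetic concrete with made-up numbers: out of 100 respondents, 60 can say development is moving too fast, while on the separate optimist/pessimist question 40 identify as pessimists, 20 as optimists, and 40 as neither. “Most say too fast” (60 > 50) and “twice as many pessimists as optimists” (40 = 2 × 20) are then both true, because the doubling compares pessimists to optimists, not to the “too fast” group.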









