I do not understand why they simply don’t just copy Zuckerberg’s own AI. I know he isn’t 100% perfectly like a human, but still closer than this LLM shit. Is it the same problem as with Data in Star Trek and the person who built him died without leaving any notes?
Even if it had gone perfectly, what would it have proved? That with the magic of AI, anyone can make Korean-inspired barbecue sauce, as long as they are in a well-appointed kitchen that happens to have all the ingredients of Korean-inspired barbecue sauce, in the right amounts, laid out in front of them. I mean, if you know to go get all that stuff, you pretty much know how to make Korean-inspired barbecue sauce already.
Also there are zillions of recipes online that you can read, or have text-to-speech read to you… or, you know… a YouTube cooking video?
I wanted to comment on another post that had a video showing how ChatGPT screwed up a basic recipe live…
And now this. You know what this reminds me of? That incident back in the 90s when Bill Gates was presenting Windows 98 to the world and his OS bluescreened on live TV. No one who saw it or heard of it at that time ever forgot about it.
But the flaws of Windows 98 were hammered out fairly quickly and it was a decent system (I hung onto it longer than most since it ran many 90s Windows games much better than XP, for obvious reasons).
With this? Despite far more money being poured into it than any OS ever got, they have produced remarkably few decent results.
Rule #868: As an Evil CEO, I will make a point of holding at least three rehearsals to prevent having egg on my face.
The evil overlord list is eternal!
Steve Jobs figured it out, you just have guys parked outside the engineers’ homes, ready to kill their kids if the demo fails.
Watching tech bros clumsily hawk their “revolutionary society changing tech!” while they look like fools is hysterical.
They’re chasing that one moment in the early aughts when what they had actually blew people’s minds.
Time to quote Dan Olson again. This was originally written about NFTs, but just replace “crypto” with “AI” and it’s still 100% relevant:
When you drill down into it, you realize that the core of the crypto ecosystem … is a turf war between the wealthy and the ultra-wealthy. Techno fetishists who look at people like Bill Gates and Jeff Bezos, billionaires that have been minted via tech industry doors that have now been shut by market calcification, and are looking for a do-over, looking to synthesize a new market where they can be the one to ascend from a merely wealthy programmer to a hyper wealthy industrialist.
From the incomparable Line Goes Up
Given what happened (it skipping to the next step in the recipe), this was 100% “prerecorded AI” and they started it on the wrong track.
Maybe he can ask the twins for help, since he stole their entire platform anyway lol! What a douche!
God I wish this dork would fuck off already, along with the rest of the AI bullshit currently making investors and other business-wankers the world over cum themselves dry. It’s fucking embarrassing.
At this point in his presentation, you might assume Zuckerberg would leave nothing to chance. But when it came time to demonstrate the Ray-Ban MetaDisplay’s unique new wristband, he opted against using slides and decided to try it live.
The wristband is what he called a “neural interface” – in a genuinely remarkable feat of technology, it allows you to type through minimal hand gestures, picking up on the electrical signals going through your muscles. “Sometimes you’re around other people and it’s, um, good to be able to type without anyone seeing,” Zuckerberg told the crowd. The pairing of glasses and wristband is, in short, a stalker’s dream.
Jesus christ.
The pairing of glasses and wristband is, in short, a stalker’s dream.
Ha. The buyer thinks they are the stalker
The guy became a billionaire from a ‘hot or not’ college website…
The wristband is what he called a “neural interface” – in a genuinely remarkable feat of technology, it allows you to type through minimal hand gestures, picking up on the electrical signals going through your muscles.
That would genuinely be a piece of hardware I might adopt, if it actually works as well as a normal keyboard with touch typing. And obviously it has to work locally like any HID, without sending everything I type to Zuck or someone else.
Sorry. All your data belong to zuck.
I can wait until someone other than Zuck® offers something better.
I can wait until someone other than Zuck® offers something better.
We said that about the Portal. The successor to the PortalTV isn’t going to replace the units I have in the field.
From all reports it works amazingly well.
Very good and entertaining article
To a layperson, at least, it seems that consumer technology has long since entered an era of solutions in search of problems – particularly troubling at a time when the world is facing so many genuinely intractable crises. As entertaining as it is to watch our tech overlords flounder on stage, it raises bigger questions, such as: who exactly asked for this, beyond the billionaires cashing in? And: can we just not?
“can we just not?”
I feel this deep in my soul. Every day. Multiple times a day.
It’s bleak because of how hard this stuff is being pushed.
I got to laugh off the Metaverse because it flopped long before it could be forced down my throat. I looked askance at crypto, but broadly avoided it without consequence. Now I have vendors injecting AI into their tech support service, and it isn’t something I can wave away anymore.
I’ve seen several “creators” pointing to AI overviews as “evidence” of things without ever fact-checking them (because if you were interested in facts you wouldn’t bother with them anyway). I have family and friends send me AI-generated bullshit day in and day out.
It’s especially infuriating when they send me “but ChatGPT says…” about something I’m literally an expert in. Like I do this all day every day and you’re taking the word of a chatbot over me.
I hate when people do that. The robot is wrong! All the freaking time.
Oh man, when people at work do it… “I asked Claude and it said…”. Never once has this been right, or even a fruitful avenue of enquiry.
Same, yeah. On top of that, my company even turned on Cursor BugBot to review our PRs, and it calls out nonsense non-problems all the time because it lacks context.
The difference is that crypto is better than ever. I tried to send an overseas wire (to a friend from real life) and my bank just told me security blocked it and I can’t push it through
No problem sending USDC; after a few minutes it confirmed and he was able to withdraw it.
Crypto’s only use for me is making it universally easy to pay every tech/vpn company I deal with.
Beyond that, people launder money, buy drugs and do darkweb shit, with a minority of privacy folks harping on privacy tokens like Monero.
I’d love if Steam took BTC and people selling used items online embraced it. The grocery store? Ehhhhhhhhh. Nah.
The grocery store? Ehhhhhhhhh. Nah.
If my bank’s services were seamlessly replaced by a cheaper, faster crypto service, then I would not complain.
Crypto is much faster than an international wire and cheaper for sending less than $5000
And the nightmare just keeps getting worse and worse.
Especially because apparently we can’t not.
I think of it as the Juicero era. Everything needs to be a subscription, with an app, selling your data, and is barely functional. And all this in a society where the basics are getting more difficult. It’s like selling more convenience to people on the upper floors of Maslow’s hierarchy of needs, while many people struggle just to get housing, heating, food and healthcare.
Oh a new kind of advanced glasses, does it zoom or auto adjust to your prescription?
Reads article
Wait, are they seriously trying Google Glass again? Why are the solution-looking-for-a-problem people always the same as the supply-side people? They don’t understand that demand is the real driver.
Nobody had a demand for an iPhone before it was released. People just wanted a better Nokia
what? i absolutely wanted a cell phone with a decent web browser on it in 2007, are you high?
My phone did have a browser, but you had to type using the number keys. What I really wanted was a phone with a bigger physical keyboard.
Their point was that nobody knew they wanted an iPhone back then.
- Why would I want a gimped web browser on LTE data rate when I have a laptop with Wi-fi?
- Why pay all that money for what is just a phone?
- Does this make better calls than a beater 3310? No? So what’s the point?
Back then the most feature-packed phones were either some pseudo-gaming device, or a Sony Ericsson you acquired from Japan with no warranty and likely region locking.
I can’t just pull out my laptop at my shitty gas station job I had as a teen in 2007… I wanted an upgrade from my fucking slide phone’s shit browser. There was a huge market for it already. People KNEW they wanted it and it delivered.
yeah but nobody wanted google glass after it was released.
Talked to a guy recently that claimed ChatGPT has “an IQ of over 300”. Laughed hard, he got mad at me laughing.
Ask him how many "R"s are in Strawberry
Look, two Rs is accurate as long as you accept that AI knows ‘what you really mean’ and you should have just prompted better.
That drives me mad. “Oh, you don’t find AI that useful for development? You should learn how to talk to it.” Wasn’t that the point, that it would understand me?
Meh, that was the sales pitch. But name one tool in development that actually does what the sales pitch claimed. Knowing how to get useful info out of AI does involve knowing how to talk to it. Just like getting the most out of gitlab means knowing how they intend for you to organize your jobs. So AI is just like every other tool, overhyped, underdelivering, and has “some” use.
Git and even GitLab does its job quite well.
IDEs do A LOT of heavy lifting for many devs.
AI was supposed to boost productivity and eventually replace developers altogether.
One of those things is not like the otters.
What the snake oil people never bring up is that companies do try and replace devs with AI and prompts.
Then realize it doesn’t work as non-engineers don’t have the skillset to do it and then come crawling back to engineers to fix the mess.
There are no “R”'s (capital r) in strawberry.
R and r are the same letter. You can tell because a word that starts with r can be written with R at the start of the sentence
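For what it’s worth, letter counting is exactly the kind of task a few lines of ordinary, deterministic code get right every time — no prompting skills required. A trivial Python sketch (the function name is my own, just for illustration):

```python
def count_letter(word: str, letter: str) -> int:
    """Case-insensitive count of a letter in a word,
    so "R" and "r" are treated as the same letter."""
    return word.lower().count(letter.lower())

print(count_letter("Strawberry", "R"))  # prints 3
```

Same answer whether you ask about “R” or “r”, and it never apologizes and corrects itself afterwards.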
How many pounds of carbon did that answer produce?
300?
It’s that or Over 9000!!!
Ask the model to confirm the answer and it will correct itself, at least when I’ve tried that.
I’m sure there’s a mathematical or programmatic logic as to why, but seeing as I don’t need LLMs to count letters or invent new types of pseudoscience, I’m not overly interested in it.
Regardless, I look forward to the bubble popping.
I don’t need LLMs to count letters
If I can’t rely on a system to perform simple tasks I can easily validate, I’m not sure why I’d trust it to perform complex tasks I would struggle to verify.
Imagine a calculator that reported “1+1=3”. It seems silly to use such a machine to do long division.
That’s my point, I don’t use LLMs for those operations, and I’m aware of their faults, but that doesn’t mean they’re useless.
So yeah, I look forward to the AI bubble popping, but I’m still going to use LLMs for the types of tasks they’re actually suited for.
I don’t think many people on Lemmy are under the spell of AI hype, but plenty of people here are knowledgeable enough to know when, and when not, to leverage this useful, but dangerously overhyped and oversold, piece of technology.
A Math PhD will eventually make a simple arithmetic mistake if you ask them to do enough problems. That doesn’t invalidate more difficult proofs they have published in papers
A Math PhD will eventually make a simple arithmetic mistake if you ask them to do enough problems.
Which is why we don’t designate a single Math PhD as a definitive source for all mathematical wisdom.
That doesn’t invalidate more difficult proofs
If I’m handed a proof with a simple arithmetic mistake in the logic, that absolutely invalidates it
But you didn’t say that. You said you can’t trust something that makes basic mistakes. Humans make them all the time. You can’t trust any human?
Tell him I too laughed at him out loud like a lunatic.
The last 5% aren’t a nice bonus. They are everything. A 95% self driving car won’t do. Giving me random hallucinations when I try to look up important information won’t do either even if it just happens 1 out of 20 times. That one time could really screw me so I can’t trust it.
Currently AI companies have no idea how to get there yet they sell the promise of it. Next year, bro. Just one more datacenter, bro.
I get to ride in lots of different cars as part of my job, and some of the new ones display the current speed limit on the dash. It is incorrect quite regularly. My view is if you can’t trust it 100% of the time, you can’t trust it at all and you might as well turn it off. I feel the same about AI.
The ADAS in new cars varies so much in implementation. None of it can be trusted (like you said, the sign recognition is often wrong), but as a backup reminder it can be great, e.g. lane centring etc. When it feels like it’s seizing control from me it can be terrifying, e.g. automatic braking out of the blue.
Couple of examples from just this last week: I was on a multi-lane road with a posted 60km/h speed limit, and the car was trying to tell the driver it was 40, and beeped at them whenever they went over it. Another one complained about crossing the centreline marking because we were going around parked cars and there was no choice. Thankfully the car didn’t seize control in those situations and just gave an audible warning, but if it had we’d have been in the pooh, especially that second one.
People tell me the hallucinations aren’t a big deal because people should fact check everything.
- People aren’t fact checking
- If you have to fact check every single thing you’re not saving any time over becoming familiar with whatever the real source of info is
My friend told me that one of her former colleagues, wicked smart dude, was talking to her about space. Then he went off about how there were pyramids on Mars. She was like, “oh… I’m quite caught up on this stuff and I haven’t heard of this. Where can I find this info?” The guy apparently has been having super long chats with whatever LLM and thinks that they’re now diving into the “truth”.
Sounds like this idiot:
Worse, since generating a whole bunch of potentially correct text is basically effortless now, you’ve got a new batch of idiots just “contributing” to discussions by leaving a regurgitated wall of text they possibly didn’t even read themselves.
So not only are those people not fact checking, when you point out that you didn’t ask for an LLM’s opinion, they’re like “what’s the problem? Is any of this wrong?” Because it’s entirely your job to check something they copy-pasted in 5 seconds.
So many posts on on social media are obviously AI generated and it immediately makes me disregard them but I’m worried about later stages when people make an effort to mask it. Prompt it to generate text without giveaways like dashes. Have intentional mistakes or a general lack of proper structure and punctuation in there and it will be incredibly hard to tell.
99% won’t do when the consequences of that last 1% are sever.
There’s more than one book on the subject, but all the cool kids were waving around their copies of The Black Swan at the end of 2008.
Seems like all the lessons we were supposed to learn about stacking risk behind financial abstractions and allowing business to self-regulate in the name of efficiency have been washed away, like tears in the rain.
99% won’t do when the consequences of that last 1% are sever.
As an example, your whole post is great but I can’t help but notice the one tiny typo that is like 1% of the letters. Heck, a lot of people probably didn’t even notice just like they don’t notice when AI returns the wrong results.
A multi-billion-dollar technical system should be far better than someone posting to the fediverse in their spare time, but it is far worse. Especially since those types of tiny errors will be fed back into future AI training, and LLM design is not and never will be self-correcting, because it works with the data it has, and it needs so much that it will always include scraped stuff.
It should, but it can’t. OpenAI just admitted this in a recent paper: the hallucinations are baked in. Chaos is baked into the technology.
won’t do either even if it just happens 1 out of 20 times. That one time could really screw me so I can’t trust it.
20 is also the number of times you go to work per month.
Now imagine crashing your car once every month…
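The arithmetic behind that intuition is stark. Assuming (purely for illustration) that each trip is an independent 1-in-20 chance of failure, the odds of getting through a month of commutes unscathed are worse than a coin flip:

```python
# Assumed per-trip failure probability: 1 in 20 (the "5%" from above)
p_fail = 1 / 20

# Roughly one month of commutes, per the comment above
trips = 20

# Probability of at least one failure across all trips,
# treating each trip as an independent event
p_at_least_one = 1 - (1 - p_fail) ** trips
print(f"{p_at_least_one:.0%}")  # prints 64%
```

So a “95% reliable” system, used daily, fails on somebody almost two months out of three. That’s why the last 5% is everything.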
It made him look like an idiot, but the question is did it do that on purpose? or is it just worthless trash?
If the stranglehold billionaires have on the world begins to diminish, I’ll start to suspect it’s been on purpose. Until then, they’re just fucking idiots who made worthless trash.
No video?