As I’ve said elsewhere, I’m a little older. I hear a lot about AI. I’m just trying to figure out what’s “good” AI, what’s “bad,” and if there’s even a difference. I do know that there’s the whole stealing-content-to-train-AI BS going on, but is it deeper? Is there such a thing as good AI? Just trying to learn so I can be a better person.

  • Pyrixas@piefed.social · 17 hours ago

A good AI is when you use it as a tool, which, by all accounts, is what it is at the end of the day.

A bad AI is when it is used to violate copyrights, or treated as a solve-all solution to problems where AI wouldn’t be needed at all.

  • L7HM77@sh.itjust.works · 3 days ago

    All AI is machine learning: taking many inputs, running them through a series of computations, and using the result to make a decision. Most everything digital you interact with does this in some form or another.

    ‘AI,’ the marketing fad at the moment, means LLM (large language model) machine learning models, which are the ones that can respond in a human-like fashion. These have limits rooted in the math of linguistics, and there are other parts built around verifying the accuracy of the model output. This type is built on a newish architecture called the transformer model.
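    To make the transformer bit less hand-wavy: the core trick in that architecture is “attention,” where each token’s output is a weighted average of every token’s value vector. This toy NumPy sketch (nothing like a production model; the function and variable names are just illustrative) shows the mechanism:

    ```python
    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention: each output row is a weighted
        average of the rows of V, weighted by how well the corresponding
        query row matches each key row."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
        # softmax over each row so the weights sum to 1
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    # Three 4-dimensional vectors standing in for word embeddings
    rng = np.random.default_rng(0)
    x = rng.normal(size=(3, 4))
    out = attention(x, x, x)  # self-attention: tokens attend to each other
    print(out.shape)          # (3, 4): one mixed vector per input token
    ```

    A real LLM stacks dozens of these layers with learned projection matrices, but the “respond in a human-like fashion” part ultimately comes down to this kind of weighted mixing, repeated at scale.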

    None of these are inherently good or evil. They’re just another part of a toolset someone could use to solve a problem with a computer. There are social issues around how they are used, and who is making/using them. Tech companies are aggressively pushing their new toy out to market, and there aren’t any consumer protection agencies prepared for this. At enterprise scale, many data centers need to be built, and that will add strain to the power generation facilities feeding them.

    My personal gripe with the whole situation is how local governments are handling this. Taxes are being waived for new constructions, electric supply companies are raising residential rates, and all the would-be checks and balances are being paid off so this can all be rammed through. Even my local union has sunk us into it: we’re onboarding apprentices faster than we can train them, and we’ll have several hundred more members just to maintain these things. Everyone’s being promised “money.”

    All of this is done without any guarantee: no one can say how money will flow from untaxed data centers into city funds, and all of this demand could evaporate overnight. Companies are being sold a black box that they plug into the wall, and it generates revenue. Everyone’s running skeleton crews, because “AI will eliminate the human workforce,” but all the business reports show that AI isn’t doing much, just that fewer workers are being pushed harder.

    I’m mostly pissed at my union, who will not share any info with us, but have admitted to just seeing short-term dollar signs while knowing that if this works out in favor of the tech companies, it’s going to fuck up the local economy and put major pressure on the organized workforce across all trades and sectors.

  • agent_nycto@lemmy.world · 4 days ago

    Here’s a fairly well-researched and entertaining video about AI and some of its downsides:

    Some More News

    Long story short, in my opinion, there isn’t a good AI. The things it sets out to do, it does poorly, and there are ethical, bodily, environmental, and mental concerns with it.

    • BeardededSquidward@lemmy.blahaj.zone · 3 days ago

      AI has shown itself to be more detrimental than beneficial: the overdependence on it, especially among children; the light pollution from data centers and their crowding out of water and electricity in small towns; the rising cost of consumer electronics from plans to buy out all the supply. And it does things so poorly that the most modestly competent people outperform it.

  • Riskable@programming.dev · 5 days ago

    Oh my. This is a huge can of worms—especially on Lemmy. There’s a lot of anti-AI hate on this platform. Almost to the point of it being a religion.

    For reference, when people say, “AI” they’re usually talking about Large Language Models (LLMs) and other forms of generative AI (e.g. diffusion models that make images). Having said that, “AI” is an enormous topic of which LLMs are a small, but increasingly popular part.

    Furthermore, when people here on Lemmy say, “AI” they’re normally talking about “Big AI” which consists of:

    • OpenAI (ChatGPT)
    • Microsoft (Copilot)
    • Anthropic (Claude)
    • Meta (Whatsapp, Facebook, Instagram, Llama models, and more)
    • Google (Gemini and shittons of other things people don’t see and often don’t even have names people outside of Google would recognize)
    • Amazon (because they’re hosting the data centers that power a lot of the other players and also do AI stuff on their own)

    Is AI inherently bad or evil? No. It’s just the latest way of giving instructions to a computer. Considering that all computer programs are literally just instructions, an AI model is just a really fancy and often expensive way of performing the same function. Albeit with a lot more breadth and flexibility. Note that I didn’t say “depth”, haha.

    The “bad” or “evil” part of AI is mostly due to the large players (aka “Big AI”) spending literally over $1 trillion so far on data centers and hardware. There’s so much demand for their services that they’re having to build their own—often dirty, fossil fuel—power plants just to power it all.

    A lot of the talk around data centers is based on myths. For example, generating an image with AI doesn’t use a liter of water. A study came out that no one actually read beyond the summary; it stated that a really long conversation with an LLM could, in theory, use up half a liter of water, assuming the data center was powered by a fossil fuel power plant that was using water for cooling (as in, the heat dissipation required 0.5 liters of water from the cooling pond next to the power plant, not potable/drinking water).

    LLMs do use up a lot of power though! People often assume this is from training the AIs (which I’ll get to in a moment), because everyone “knows” it’s a long, involved process that can take months (even with a $50 billion data center specifically made for AI). However, it’s actually all the people and businesses using AI that use up all that energy. The biggest, most power-hungry step is “inference,” which is the point where the LLM tries to figure out what you just asked of it.
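    A rough intuition for why inference dominates: training happens once, but every generated token for every user requires a fresh forward pass over the growing context. This toy Python sketch (the “model” is a stand-in that just cycles through the alphabet; no real LLM works this way) shows the one-pass-per-token loop:

    ```python
    def forward_pass(context):
        # Stand-in for billions of matrix multiplications: "predict" the
        # next letter deterministically, purely for illustration.
        return chr((ord(context[-1]) - 97 + 1) % 26 + 97)

    def generate(prompt, n_tokens):
        context = list(prompt)
        passes = 0
        for _ in range(n_tokens):
            context.append(forward_pass(context))
            passes += 1  # one full forward pass per generated token
        return "".join(context), passes

    text, passes = generate("a", 5)
    print(text, passes)  # abcdef 5
    ```

    Multiply that one-pass-per-token cost by millions of users chatting all day, and the lifetime energy spent serving a model can swamp the one-time cost of training it.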

    The important point here is that AI is actually being used. There’s real demand for it! It’s not just fools asking ChatGPT for strange pizza recipes. It’s mostly businesses using it for things like writing and checking code, investigating server logs for malicious activity, or any number of very businessy IT things.

    The demand for AI services is so great that they can’t build data centers fast enough. Big AI specifically is having trouble keeping responses within satisfactory time windows. The business models are still developing, but in a lot of cases they’re actually not charging enough to make up for their spending. OpenAI and Microsoft in particular are losing money like crazy trying to compete.

    I ran out of time… I’ll reply again about the copyright situation, training costs, and open weight (aka open source) models in a bit…

    • Melonpoly@lemmy.world · 3 days ago

      The “bad” or “evil” part of AI is mostly due to the large players (aka “Big AI”) spending literally over $1 trillion so far on data centers and hardware.

      Are you forgetting the IP theft, nonconsensual data harvesting, increase in prices for consumer electronics, reduction in critical thinking, and the vast amount of public money and space given to these companies that could’ve been used for something more beneficial to society?

      I used to like tech but the tech industry ruined it.

    • agent_nycto@lemmy.world · edited · 4 days ago

      There’s a lot of anti-AI hate on this platform. Almost to the point of it being a religion.

      There’s a lot of justified hate, outside of Lemmy as well. The irony of calling it a religion, when there are people out there literally worshipping their AI, is notable.

      No. It’s just the latest way of giving instructions to a computer.

      While that’s sort of true, it’s obfuscating what actually happens. You’re technically just giving instructions to a computer, but it’s not like a software program on your personal computer: you’re sending a request out to a very large remote computer running a very complicated program, at the same time as a lot of other people.

      The “bad” or “evil” part of AI is mostly due to the large players (aka “Big AI”) spending literally over $1 trillion so far on data centers and hardware.

      There’s more than that. There’s the ethical concerns of making pornography of people without their consent, especially minors. There’s art theft. There’s people losing jobs. There’s the environmental issues. There’s the mental issues. There’s the problems with people trying to get jobs. There’s the drop in reading comprehension. There’s the people being driven to kill or kill themselves over it. There’s people falling in love with their AI and avoiding other people for it. There’s the noise. The water usage. The electrical pull. The Ponzi scheme funding.

      You’re trying to preemptively say that these complaints are only about the big AI, but these are inherent for all of them.

      There’s so much demand for their services that they’re having to build their own—often dirty, fossil fuel—power plants just to power it all.

      Source? People are already having to pay more for electricity. Tahoe is about to not have any electricity because of the AI center.

      Also those sus dash marks.

      the heat dissipation required 0.5 liters of water from the cooling pond next to the power plant, not potable/drinking water).

      Ok, but where do you think that water was acquired to fill that pond? It’s from local sources. Closed loop systems aren’t actually great for the environment, either. You remember the water cycle? Where water evaporates, turns into clouds, turns into rain, then dries up and repeats itself? Well, there’s only a specific amount of water on the planet, and only some of it is usable by humans. Data centers and AI centers using closed loop systems take a huge chunk of water out of that water cycle. With global warming in the mix, we’re starting to run out. Oh, and data centers and AI centers don’t disclose how much water they are taking out of the local system, so we can only guess, but the best estimate is summed up as “a fuck load”.

      However, it’s actually all the people and businesses using AI that use up all that energy. The biggest, most power-hungry step is “inference” which is the point where the LLM tries to figure out what you just asked of it.

      Saying “it doesn’t use power unless you use it” isn’t really an argument against its power usage. And saying it uses more power after it’s started is worse.

      The important point here is that AI is actually being used. There’s real demand for it!

      That demand, though, isn’t profitable. That’s why companies have been upping their rates and the building of AI centers has been stalling.

      The demand for AI services is so great that they can’t build data centers fast enough.

      That’s not why people have been trying to build a lot of data centers. There’s a lot of speculative investing going on, and there’s a lot of people trying to get onto the ground floor. So these people are dumping a crapton of money into it, trying to get ahead of everyone else.

      This isn’t coming from some bandwagon or anti-progress/tech sentiment. AI is just bad.