• merc@sh.itjust.works
    1 day ago

    The difference is that LLMs generate something that is designed to blend in. It’s supposed to convince someone that it was made by a human.

    So, a “vibe constructed” house would probably look like a real house to someone who didn’t know much about houses. But, the pipe from the sink might just go into a space between the walls. The electrical system would be a random mess of lines that would short out as soon as it was connected to the grid. The doors might look right at first, but when you tried to open one you’d see that the hinges were installed in a way that opening it was impossible.

    • takeda@lemmy.dbzer0.com
      2 hours ago

      One good way to realize how unreliable LLMs are is to ask about something you are an expert in.

      Many people say that they aren’t experts in anything, so that won’t work for them, but here is one way.

      Reddit made a deal with Google and OpenAI to send their data to them. I don’t know if the deal with OpenAI is still valid, as ChatGPT’s responses were a bit outdated, but Gemini is definitely getting the latest data.

      If you have a Reddit account, ask it about yourself, or about someone you know well IRL. This works really well if it is an old account.

      The first responses might sound right, but as you keep talking you see how it invents things, and does so in a convincing manner. Don’t correct it; just ask more about it, and it will provide additional information backing it up, including links to Reddit posts (which don’t mention any of it).

      It is quite amazing. LLMs are being sold to us as job automation, but their real purpose is to be used on social media to manipulate our opinions. The bullshitting (oh, sorry, I mean “hallucination”) is a feature.

      https://mashable.com/article/anonymous-researchers-used-ai-on-reddit-debate-forum

    • Rose@slrpnk.net
      23 hours ago

      A while ago, there was a YouTube video of people laughing at AI-generated floor plans.

      Because of course there was a company that tried to make an AI floor plan generator without a shred of thinking. They posted the “good” ones on their website, and even those had obvious weird details: completely misproportioned rooms, ten bathrooms in a small house, and doors just straight up missing everywhere.

      • Buddahriffic@lemmy.world
        23 hours ago

        Yeah, the fact that they still tried to sell or use so much AI slop with major flaws should have been a clear sign that you should never trust marketers, because they’ll push obvious garbage as if it were amazing. Same thing with those early Coke AI ads that were just a series of unrelated scenes that look okay until you look any closer, yet someone greenlit them despite all that.

        They were phoning it in so hard they pushed what were technical demos at best as final products.

        Though from my perspective, I already had inverse trust based on apparent ad budget even before AI slop showed up because enshitification and apathy about actual quality were rampant long before AI slop was a thing.

    • gaiussabinus@lemmy.world
      21 hours ago

      You might be shocked to hear this, but all of those issues happen with no AI involved at all. Every time a bastard is born, you should slap an engineer.

    • ILikeBoobies@lemmy.ca
      24 hours ago

      I’ve seen a home where the dishwasher pipe just went into the wall and ended. Mixed copper and aluminum wire. The way doors opened was random.

      The vibe builder might not be much different than what we have.

    • tempest@lemmy.ca
      23 hours ago

      The thing is, while LLMs do make mistakes, when you have something you can drive conformance with, they are actually pretty good (eventually).

      For electrical or plumbing, we can simulate those pretty well, and an LLM can iterate until it gets a reasonable output, since it can check its work against a simulator.

      You could likely live in a house for a bit before you stumbled into something stupid. You could live your entire life and not notice if you never tore out the walls.

      However if you went to change a light switch you might discover it used 3 different kinds of screws and the screw terminals are mislabeled.
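      The iterate-against-a-simulator idea above can be sketched as a plain generate-and-check loop. This is a minimal illustration, not a real system: `llm_generate` and `simulate` here are hypothetical stubs standing in for an actual model call and an actual electrical/plumbing simulator.

```python
# Sketch of the generate-and-check loop described in the comment above.
# llm_generate and simulate are hypothetical stubs standing in for a real
# model call and a real electrical/plumbing simulator.

def llm_generate(spec, feedback=None):
    """Stub LLM call: a real version would prompt a model with the spec,
    feeding the simulator's error message back in on each retry."""
    if feedback:  # retry attempt, after seeing the simulator's complaint
        return "wire_gauge=14; breaker=15A"
    return "wire_gauge=18; breaker=15A"  # first attempt: undersized wire

def simulate(design):
    """Stub simulator: rejects undersized wire on a 15 A breaker."""
    if "wire_gauge=18" in design:
        return False, "18 AWG wire cannot safely carry 15 A"
    return True, "ok"

def iterate_until_valid(spec, max_rounds=5):
    """Regenerate until the design passes simulation, or give up."""
    feedback = None
    for _ in range(max_rounds):
        design = llm_generate(spec, feedback)
        ok, feedback = simulate(design)
        if ok:
            return design
    raise RuntimeError("no conforming design found in %d rounds" % max_rounds)

print(iterate_until_valid("branch circuit for a kitchen"))
# prints: wire_gauge=14; breaker=15A
```

      The point is that the simulator, not the model, decides when to stop; the LLM only has to eventually produce something that passes the check.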