We’re cooked.

  • calliope@retrolemmy.com · 4 hours ago

    Seems like very thinly veiled advertising for a new version of Google’s AI image generation.

    If AI is getting this good at imitating the things that signal a photo is real, then guys: We are cooked.

    “We are cooked, fellow kids!”

    The author also pretty much says “all other AI was slop before this, right guys?”

    • Hackworth@piefed.ca · 4 hours ago

      Yeah, a more honest take would discuss the strengths & weaknesses of the model. Flux is still better at text than Nano Banana, for instance. There’s no “one model to rule them all,” however much tech journalism seems to want to write as if there were.

  • bonenode@piefed.social · 12 hours ago

    I feel the bus one is actually quite easy to spot as fake. There’s no one with their head down looking at their phone.

    • Sinaf@lemmy.world · 4 hours ago

      Most of these images have really shitty resolution as well. Can’t they generate higher res stuff or would inconsistencies otherwise be more obvious?

      • Hackworth@piefed.ca · 4 hours ago

        Generating higher-res stuff directly requires way more compute. But there are plenty of AI upscalers out there, some better, some worse, and they’re also built into Photoshop now. The difference between an AI image that’s easy to spot and one that’s hard to spot is using good models. The difference between one that’s hard to spot and one that’s nearly impossible to spot is another 20 minutes of work in post.
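
        To give a rough idea of what that upscale-in-post step looks like, here’s a minimal sketch using the diffusers library’s Stable Diffusion x4 upscaler. This is just one example of a standalone upscaler, not whatever the article or Photoshop uses; the file names and prompt are placeholders.

        # Minimal AI upscaling sketch: generate at a size the model handles well,
        # then upscale in post. Model choice, file names, and prompt are illustrative.
        import torch
        from PIL import Image
        from diffusers import StableDiffusionUpscalePipeline

        # Load a 4x latent upscaler (downloads weights on first run).
        pipe = StableDiffusionUpscalePipeline.from_pretrained(
            "stabilityai/stable-diffusion-x4-upscaler",
            torch_dtype=torch.float16,
        ).to("cuda")

        # Hypothetical low-res render from the base image model.
        low_res = Image.open("generated_512.png").convert("RGB")

        # The prompt guides what detail gets hallucinated during upscaling.
        upscaled = pipe(
            prompt="crowded city bus interior, natural light",
            image=low_res,
        ).images[0]

        upscaled.save("generated_2048.png")  # 4x upscale: 512 -> 2048

        The point stands either way: the heavy compute happens at generation time, and the resolution you see in an article screenshot says more about the workflow than about what the model can ultimately produce.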