• Singletona082@lemmy.world
    2 months ago

    Prove it.

    Or not. Once you invoke ‘there is no free will,’ you have literally stated that everything is deterministic, meaning everything that will happen has already happened.

    It is an interesting coping strategy for the shortness of our lives and our insignificance in the cosmos.

      • Botzo@lemmy.world
        2 months ago

        How about: there’s no difference between actual free will and an infinite universe of infinite variables affecting your programming, resulting in a belief that you have free will. Heck, a couple million variables is more than plenty to confuddle these primate brains.

        • Womble@lemmy.world
          2 months ago

          OK, but then you run into the question of why billions of variables create free will in a human but not in a computer. Do they create free will in a pig? A slug? A bacterium?

          • wizardbeard@lemmy.dbzer0.com
            2 months ago

            Because billions is an absurd understatement, and computers have constrained problem spaces far less complex than even the most controlled life of a lab rat.

            And who the hell argues that animals don’t have free will? They don’t have full sapience, but they absolutely have will.

            • Womble@lemmy.world
              2 months ago

              So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will on this side of the line, just mechanics and random chance on the other?

              I just don’t find it a particularly useful concept.

              • CheeseNoodle@lemmy.world
                2 months ago

                I’d say it ends when you can’t predict with 100% accuracy 100% of the time how an entity will react to a given stimuli. With current LLMs if I run it with the same input it will always do the same thing. And I mean really the same input not putting the same prompt into chat GPT twice and getting different results because there’s an additional random number generator I don’t have access too.