• Ignotum@lemmy.world
    3 days ago

    By that logic it would be unethical for an expert to give advice, or to even teach others to identify mushrooms, since they too are fallible and it could lead to death?

    Or saying it was unethical to invent cars because they can (and most certainly do) cause deaths.

    Almost everything would be unethical, really. The world is chaotic, nothing is perfect, and deaths happen; all we can do is work to reduce the risks

    • floquant@lemmy.dbzer0.com
      3 days ago

      What makes an expert is the ability to say “this is unequivocally safe to eat, because I can positively identify it based on this and this feature”, as well as “it is not possible/I am not able to confidently identify this mushroom as safe”

      • Ignotum@lemmy.world
        3 days ago

        So an AI that can identify mushrooms, and also tell the user if a mushroom is too similar to a dangerous mushroom to be identified with high enough certainty for it to be safe, would be ethical?

        Then how can anyone claim that no such system can ever be created? That makes no sense

          • Ignotum@lemmy.world
            3 days ago

            So experts cannot identify mushrooms at all by looking at them?

            They might turn it around and look at it from different angles, but then just make an AI that takes in multiple images from different angles, and maybe have it ask for more angles if it cannot see everything it needs to.

            And if the experts use other senses besides vision, like smell and touch, just make an AI that says “it might be X or Y; the only way to tell them apart is by smell, so I can’t be sure”

          • Ignotum@lemmy.world
            3 days ago

            Yeah, over-the-top AI hype is annoying, and there are many valid criticisms to be had with regard to how AI is being trained and used (mainly generative AI), but all this absolutist anti-AI nonsense beats everything

    • manualoverride@lemmy.world
      3 days ago

      Now I don’t profess to remember the entire paper, but one section was certainly “Human factors”: the difference with an expert is that a human can place emphasis on the dangers above all else, which an AI is often incapable of conveying, and the car will still have a human driver.

      The whole point was that this was a very limited and narrow language model, paired with AI image recognition, under the assumption that the thing the human was describing and picturing was a mushroom, and it was still fallible. A mushroom identification program specifically is a really bad idea and absolutely unethical to create; a system that answers any question you ask it, where you sort out the guardrails as you go… that’s dangerous.

      • Ignotum@lemmy.world
        3 days ago

        So the argument is that you tried an AI once and it didn’t do a thing, therefore it is impossible to create an AI that is able to do it?

        Let’s say we reach the point where we can scan and then simulate the entire brain of a mushroom expert; then you’d have an AI that would give the same responses as a human expert would. Is it ethical now? (Ignoring the ethics of simulating a person like that.)

        Simple classification problems are relatively trivial: just train an image classifier to take in a picture of a mushroom and have it predict the type, as well as whether the mushroom is similar to a dangerous one, and for good measure whether the picture is good enough to give reliable results. Train it on feedback from experts and it should end up about as reliable as the experts it was based on
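        The “predict, but refuse when uncertain” behaviour being described can be sketched in a few lines. Everything below (the species names, the 0.95 threshold, the probability values) is invented for illustration, not taken from any real mushroom model:

```python
# Minimal sketch of the "classify, but abstain when unsure" idea.
# The species list, threshold, and probabilities are all made up;
# a real system would get probabilities from a trained image
# classifier, not hand-written numbers.

CLASSES = ["chanterelle", "false_chanterelle", "death_cap"]
DANGEROUS = {"false_chanterelle", "death_cap"}
THRESHOLD = 0.95  # require near-certainty before calling anything safe

def interpret(probs):
    """Turn per-class probabilities into a cautious verdict."""
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    label, p = CLASSES[best], probs[best]
    # Abstain outright when the model is not confident enough.
    if p < THRESHOLD:
        return "cannot identify confidently - do not eat"
    # Abstain when a dangerous look-alike still carries
    # non-trivial probability mass.
    risk = sum(probs[i] for i, c in enumerate(CLASSES)
               if c in DANGEROUS and i != best)
    if risk > 1 - THRESHOLD:
        return f"looks like {label}, but too close to a dangerous species - do not eat"
    return f"identified as {label}"

print(interpret([0.97, 0.02, 0.01]))  # confident, low look-alike risk
print(interpret([0.60, 0.35, 0.05]))  # not confident enough, so it abstains
```

        In a real system the probabilities would come from the classifier’s softmax output, and the threshold would have to be tuned against expert-labelled data rather than picked by hand.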

        • manualoverride@lemmy.world
          2 days ago

          Well, I did study for 5 years, coded the AI myself, and spent 4 months training it using screensaver processing on ~800 computers. It’s not like I downloaded an AI from the Play Store and declared it to be rubbish. 😀

          Even with reinforcement learning from human feedback, this is still a neural network where not every pathway leads to the correct outcome.

          Regardless of all the complexities, people are still far more accepting of human error than of AI error in extreme situations.

          • Ignotum@lemmy.world
            2 days ago

            Oh, are you walking back the “it would be unethical” claim, and the claim that an AI model cannot give nuanced responses like a human can?

            Sounds like you are now saying that a model can be made that is far better than any human expert, but since it can never be perfect, and people are far less forgiving when machines make mistakes… therefore what, exactly?

            If we could make something that would reduce the absolute amount of yearly mushroom poisonings, then I would view that as an ethically good thing. Not doing so would be like not making a medicine because it can have side effects; if the benefits outweigh the risks, then I view it as a good thing

            • manualoverride@lemmy.world
              2 days ago

              Can you see the irony of us having a nuanced debate which is leading to misunderstanding, because we are using a medium where detail and emphasis are difficult to achieve? 😀

              My assumption about my mushroom identification program was that it would become widely available, which would be unethical.

              In the hands of a trained mycologist using it purely as a check on their established results: possibly useful, but easy to misuse.

              A mycologist using the program to perform the identification first, which they would then check: also dangerous, as human factors would lead to confirmation bias.

              AI systems inevitably lead to overconfident conclusions from people without the time or knowledge to know the potential risks.

            • petrol_sniff_king@lemmy.blahaj.zone
              1 day ago

              If we could make something that would reduce the absolute amount of yearly mushroom poisonings,

              You are begging the question. This is not known.

                • petrol_sniff_king@lemmy.blahaj.zone
                  22 hours ago

                  You’re in here arguing with a dissertation you haven’t read because there might possibly be a chance we could maybe build an AI that could do this?

                  If we can’t, then you have nothing to add to this conversation.