• manualoverride@lemmy.world · 2 days ago

    Well, I did study for 5 years, code the AI myself, and spend 4 months training it using screensaver processing on ~800 computers. It's not like I downloaded an AI from the Play Store and declared it to be rubbish. 😀

    Even with reinforcement learning from human feedback, this is still a neural network where not every pathway leads to the correct outcome.

    Regardless of all the complexities, people are still far more accepting of human error than of AI error in extreme situations.

    • Ignotum@lemmy.world · 2 days ago

      Oh, are you walking back the “it would be unethical” claim, and the claim that an AI model cannot give nuanced responses like a human can?

      Sounds like you are now saying that a model can be made that is far better than any human expert, but since it can never be perfect, and because people are far less forgiving when machines make mistakes… therefore what, exactly?

      If we could make something that would reduce the absolute number of yearly mushroom poisonings, I would view that as an ethically good thing. Not doing so would be like refusing to make a medicine because it can have side effects; if the benefits outweigh the risks, then I view it as a good thing.

      • manualoverride@lemmy.world · 2 days ago

        Can you see the irony of us having a nuanced debate that is leading to misunderstanding, because we are using a medium where detail and emphasis are difficult to achieve? 😀

        My assumption about my mushroom identification program was that it would become widely available, which would be unethical.

        In the hands of a trained mycologist using it purely as a check on their established results: possibly useful, but easy to misuse.

        A mycologist using the program to perform the identification first, which they would then check: also dangerous, as human factors would lead to confirmation bias.

        AI systems inevitably lead to overconfident conclusions from people without the time or knowledge to know the potential risks.

      • petrol_sniff_king@lemmy.blahaj.zone · 1 day ago

        “If we could make something that would reduce the absolute amount of yearly mushroom poisonings…”

        You are begging the question. This is not known.

          • petrol_sniff_king@lemmy.blahaj.zone · 22 hours ago

            You’re in here arguing with a dissertation you haven’t read, because there might possibly be a chance we could maybe build an AI that could do this?

            If we can’t, then you have nothing to add to this conversation.