• TrackinDaKraken@lemmy.world · 75 points · 11 hours ago

    Not that vehicles shipped after 2023 will be able to either.

    Waymo, with Lidar and all the tech, still uses remote human drivers to deal with the harder situations.

    I don’t think we’re going to see FSD anytime soon. Turns out the last few percent can’t just be ignored, 95% isn’t good enough, and 100% is really fucking hard.

    • grue@lemmy.world · 61 points · 11 hours ago

      Turns out the last few percent can’t just be ignored, 95% isn’t good enough, and 100% is really fucking hard.

      The funny part is that every CS grad student studying AI understood that perfectly well a decade ago.

      • qprimed@lemmy.ml · 28 points · 10 hours ago

        exactly. I have at least a dozen projects that prove the fucking point.

        one can hope there will be an avalanche of class-action suits that crush this nazi and his swasticar.

        • Corkyskog@sh.itjust.works · 4 points · 2 hours ago

          It would be funny if damages got astronomical because he promised everyone that they would be able to rent out their car as a taxi service for extra income.

    • ChicoSuave@lemmy.world · 20 points · 10 hours ago

      Turns out the last few percent can’t just be ignored, 95% isn’t good enough, and 100% is really fucking hard.

      Techbros are learning this right now with AI too. Who could have guessed that getting mostly there isn’t the same as being there?

    • BillyClark@piefed.social · 12 points · 9 hours ago

      95% isn’t good enough, and 100% is really fucking hard.

      This is a more extreme case of the famous 90-90 rule in programming:

      “The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.”

    • Rhaedas@fedia.io · +4 / −12 · 10 hours ago

      Humans aren’t better at driving in bad situations; they’re just better at ignoring most of the input and focusing on one thing, and, more importantly, at taking risks, which a computer isn’t going to be programmed to do. If a human navigates through a bad rainstorm, barely able to see anything, and makes it out fine, they claim they’re better than a self-driving car that would shut down. Or, more simply, take a route that’s very tight and risky: a human will YOLO it and make it through. They’re not better, they’re just lucky a lot.

      • surewhynotlem@lemmy.world · 8 points · 6 hours ago

        Humans aren’t better at driving in bad situations,

        Gives reasons why humans are better at driving in bad situations

        Seriously?

        • Rhaedas@fedia.io · +2 / −4 · 6 hours ago

          Yes, if you read them. If you consider doing a Hail Mary in bad weather and managing to not hit anything better, then I guess they are better… at taking risks.

      • DomeGuy@lemmy.world · 12 points · 8 hours ago

        Humans are better in bad situations because humans drive like humans, and they expect all the other cars on the road to also drive like a human.

        The worst thing on the road is to be unpredictable, and an AI encountering a situation not in its training set is unpredictably unpredictable.

        • Rhaedas@fedia.io · +2 / −4 · 7 hours ago

          You’re correct on AI. But I laughed at you saying humans are predictable. Seen any dashcam footage?

          • ikidd@lemmy.dbzer0.com · 6 points · edited · 7 hours ago

            I just imagine the dumbest thing someone can do in a situation and model for that.

            Icy roads? Well, the logical thing to do is make sudden moves at the last possible second without leaving a buffer. Stand on the throttle at every intersection, and start braking when you normally would in the middle of summer, of course.

            Honestly, you learn to predict unpredictability. Slight movements will tell you when someone is going to change lanes without a shoulder check or cross three lanes of traffic to make an exit that they could just have easily gone on to the next interchange without endangering themselves and others. Hell, I watch peoples eyes in their side mirrors look at me as they incorrectly judge how much space they have to insert themselves in front of me, when there’s a kilometer of space behind me they could use instead.