• raldone01@lemmy.world

    Some of the SOTA models, like Gemini 3 Pro, are getting quite good at ballpark estimations. I have fed it multiple complex formulas from my studies, along with some values. The end result is often quite close, and similar in accuracy to an estimate I would make myself. (It is usually more accurate than my own.)

    Now, I don’t argue that there is any consciousness or magic going on. But I think the generalization happening here is quite something! I have trained AI models for various robot control and computer vision tasks. Compared to older machine learning approaches, transformers are very impressive, computationally accessible, and easy to use. (In my limited experience.)

    • Lfrith@lemmy.ca

      I find it okay for writing programs, since you can run the output and verify that it is correct.

      But for actual analysis, not so much: when verifying what comes out, it’s not completely reliable, even for things it should be, like numbers. The numbers might be close, but still off.

      Abstract stuff might be fine. But it’s still not something to entirely trust for analysis, because of errors. There’s a lot of double-checking that needs to happen.
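      The double-checking described above can be partly automated: compare the model's numbers against a trusted reference with an explicit tolerance, so "close but still off" answers are caught. A minimal sketch (the values and the 5% threshold are illustrative assumptions, not from the thread):

```python
import math

# Hypothetical LLM-produced ballpark figure vs. the exact reference value
# (here: the speed of light in m/s, purely as an illustration).
llm_estimate = 3.02e8
exact_value = 2.99792458e8

def within_tolerance(estimate, reference, rel_tol=0.05):
    """Accept estimates within a relative tolerance (default 5%)."""
    return math.isclose(estimate, reference, rel_tol=rel_tol)

# A loose ballpark check passes, while a strict 0.1% check fails,
# which is exactly the "close but still off" failure mode.
print(within_tolerance(llm_estimate, exact_value))         # True
print(within_tolerance(llm_estimate, exact_value, 0.001))  # False
```

      The point of making the tolerance explicit is that you decide up front how much error is acceptable, instead of eyeballing each answer.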