• altkey (he\him)@lemmy.dbzer0.com · 20 hours ago

    If we actually want to maintain our standard of living and reduce the population size, we may very well need AI automation utilities. They can keep scaling down in size and power consumption in the way that a real human can’t.

    Theorizing about LLMs’ usefulness and resourcefulness doesn’t help you there. For now they are rather useless, embarrassingly inefficient resource hogs that exist purely because of the bubble. It’s a gamble at best, or a waste of resources and a degradation of the human workforce at worst.

  • masterspace@lemmy.ca · 18 hours ago

      AI is not just LLMs, and it has already revolutionized biotechnical engineering through things like AlphaFold. Like I said, “AI”, as in neural-network algorithms of which LLMs are just one example, is literally solving entirely new classes of problems that we simply could not solve before.

    • altkey (he\him)@lemmy.dbzer0.com · 13 hours ago

        LLMs are what is usually sold as AI nowadays. Conventional ML is boring and too normal, not as exciting as a thing that processes your words and gives responses, almost as if it were sentient. Nvidia couldn’t have reached its current capitalization if we had defaulted to useful models that speed up a technical process after some fine-tuning by data scientists, like shaving off another 0.1% on Kaggle or IRL in a classification task. That usually produces big but still incremental changes. What is sold as AI, and in the capacity it fits into your original comment as a lifesaver, is nothing short of a reinvention of one’s workplace, or completely replacing the worker. That’s hardly happening anytime soon.

      • masterspace@lemmy.ca · 13 hours ago

          LLMs are what is usually sold as AI nowadays. Conventional ML is boring and too normal, not as exciting as a thing that processes your words and gives responses, almost as if it were sentient.

          To be fair, that’s because there are a lot of automation situations where having a semantic understanding of a situation can be extremely helpful in guiding action, compared to an ML model that is not semantically aware.

          The reason that AI video generation and outpainting are so good, for instance, is that the model analyzes a picture, divides it into human concepts using language, uses language to guide how those things can realistically move and change, and then applies actual image generation. Systems like Waymo’s self-driving stack aren’t run through LLMs, but they are machine-learning models operating on extremely similar principles to build a semantic understanding of the driving world.

      • masterspace@lemmy.ca · edited · 16 hours ago

          https://arstechnica.com/science/2024/10/protein-structure-and-design-software-gets-the-chemistry-nobel/

          I don’t have to dream, DeepMind literally won the Nobel Prize last year. My best friend did his PhD in protein crystallography, and it took him 6 years to predict the structure of a single protein underlying Legionnaires’ disease. He’s now at MIT and just watched DeepMind predict hundreds of thousands of them in a year.

          If you vet your news sources by only listening to the ones that are anti-AI, then you’re going to miss the actual exciting advancements lurking beneath the oceans of tech-bro hype.

        • Tollana1234567@lemmy.today · 49 minutes ago

            AI has not revolutionized biology research at all; it’s not complex enough to come up with new experimentation methods or to manage the current ones. It may be used to write AI-slop papers, but that’s about it.

        • supersquirrel@sopuli.xyz · edited · 16 hours ago

            You need to take a step back and realize how warped your perception of reality has gotten.

            Sure, LLMs and other forms of automation, artificial intelligence, and brute-forcing of scientific problems will continue to grow.

            What you are talking about though is extrapolating from that to a massive shift that just isn’t on the horizon. You are delusional, you have read too many scifi books about AI and can’t get your brain off of that way of thinking being the future no matter how dystopian it is.

            The value of AI just simply isn’t there, and that is before you even include the context of the ecological holocaust it is causing and enabling by getting countries all over the world to abandon critical carbon-footprint reduction goals.

            Don’t come at me like you are being logical here; at least admit that this is the cool sci-fi tech dystopia you wanted and have been obsessed with. That is the only way you get to this point of delusion. The rest of us see these technologies and go “huh, that looks like it has some use,” whereas people like you have what is essentially a religious view of AI, and from my perspective it is pathetic, and offensive to religions that actually have substance to their philosophy and beliefs.

            The rich are using the gullibility of people like you to pump and dump entire economies you fool.

            Edit: I am not sure why I wrote this as if you might actually take a step back. You won’t. This message is really for everyone else, to help emphasize how the interests of the entire earth are being derailed by the advent of a shitty religion and its mindless disciples. The sooner the rest of us get on the same page, the sooner we can resist people like you and keep your rigid, broken worldviews from destroying our futures.

          • masterspace@lemmy.ca · 16 hours ago

              You seem to be projecting about warped perspectives.

              Sure, LLMs and other forms of automation, artificial intelligence, and brute-forcing of scientific problems will continue to grow.

              That’s not brute-forcing a scientific problem; it’s literally a new type of algorithm that lets computers solve fuzzy pattern-matching problems that they never could before.

              What you are talking about though is extrapolating from that to a massive shift that just isn’t on the horizon.

              I’m just very aware of the number of problems in society that fall into the category of fuzzy pattern matching / optimization. Quantum computing is also an exciting avenue for solving some of these problems, though it is incredibly difficult and complicated.

              You are delusional, you have read too many scifi books about AI and can’t get your brain off of that way of thinking being the future no matter how dystopian it is.

              This is just childish name calling.

              The value of AI just simply isn’t there, and that is before you even include the context of the ecological holocaust it is causing and enabling by getting countries all over the world to abandon critical carbon-footprint reduction goals.

              Quite frankly, you’re conflating the tech-bro hype around LLMs with AI more generally. The ecological footprint of AlphaFold is tiny compared to previous methods of protein analysis, which took labs of people years to discover each individual structure. On top of the ecological footprint of all of those people and all of their resources for those years, they also had to use high-powered equipment like centrifuges and X-ray machines. AlphaFold did that hundreds of thousands of times in a year with some servers.

              Don’t come at me like you are being logical here; at least admit that this is the cool sci-fi tech dystopia you wanted and have been obsessed with. That is the only way you get to this point of delusion. The rest of us see these technologies and go “huh, that looks like it has some use,” whereas people like you have what is essentially a religious view of AI, and it is pathetic, and offensive to religions that actually have substance to their philosophy and beliefs.

              Again, more childish name calling. You don’t know me, don’t act like you do.

            • supersquirrel@sopuli.xyz · edited · 16 hours ago

                I am treating you like a child because you refuse to use your brain.

                You gave me one obscure, very early-stage example that isn’t even connected to the overall rise in value of LLMs and other forms of AI that has created an economic bubble worse than the dotcom bubble. So you are claiming the next real AI revolution is justtttt around the corner, with a totally new technology, you swear?

                Maybe?

                What I do know for sure is you are far more interested in that maybe than you are in actually engaging with the existential real world problems we are facing right now…

              • masterspace@lemmy.ca · edited · 13 hours ago

                  I am treating you like a child because you refuse to use your brain.

                  No, you’re doing so because you started doomscrolling before you had coffee, and now you’re trying to justify your uncalled-for rudeness.

                  You gave me one obscure

                  It literally won the Nobel Prize.

                  very early stage example

                  It is not early stage; predicting the structures of those proteins has already actively changed the course of biomedical science. This isn’t early-stage research that needs fleshing out; this is peer-reviewed, published research that has caused entire labs and teams to completely change what they’re doing and how.

                  that isn’t even connected to the overall rise in value of LLMs and other forms of AI

                  It is, in that it uses the same underlying type of algorithms and is literally from the same team that developed the “T” in ChatGPT.

                  So you are claiming the next real AI revolution is justtttt around the corner with a totally new technology you swear?

                  I have not claimed that. I said that AI algorithms are likely to be part of our climate solutions and of our ability to serve more people with less manual labour. They help to solve entirely new classes of problems, and can do so far more efficiently than years of human labour.

                  Rage out about tech bubbles and hype bros if you want. Last time it was crypto, streaming before that, apps and mobile before that, social before that, the internet before that, etc., etc. Hype bubbles come and go; sometimes the underlying technology is actually useful, though.

                    • supersquirrel@sopuli.xyz · edited · 1 hour ago

                      I have not claimed that. I said that AI algorithms are likely to be part of our climate solutions and of our ability to serve more people with less manual labour. They help to solve entirely new classes of problems, and can do so far more efficiently than years of human labour.

                    hahaha, like AI will be a part of climate solutions? Are you serious right now?

                    Y’all are incapable of understanding that expertise in your domain does not make you an expert in everything else. There is no way anyone in the industry you are speaking about will listen to climatologists and environmental scientists long enough to even begin to be helpful.

                    You keep talking about technology, when this is really a discussion about the catastrophic myopia of the tech industry, of which you are making yourself a perfect example.

                    https://www.goldmansachs.com/insights/articles/how-ai-is-transforming-data-centers-and-ramping-up-power-demand

                    To some biologists, that approach leaves the protein folding problem incomplete. From the earliest days of structural biology, researchers hoped to learn the rules of how an amino acid string folds into a protein. With AlphaFold2, most biologists agree that the structure prediction problem is solved. However, the protein folding problem is not. “Right now, you just have this black box that can somehow tell you the folded states, but not actually how you get there,” Zhong said.

                    “It’s not solved the way a scientist would solve it,” said Littman, the Brown University computer scientist.

                    This might sound like “semantic quibbling,” said George Rose, the biophysics professor emeritus at Johns Hopkins. “But of course it isn’t.” AlphaFold2 can recognize patterns in how a given amino acid sequence might fold up based on its analysis of hundreds of thousands of protein structures. But it can’t tell scientists anything about the protein folding process.

                    AlphaFold2’s success was founded on the availability of training data — hundreds of thousands of protein structures meticulously determined by the hands of patient experimentalists. While AlphaFold3 and related algorithms have shown some success in determining the structures of molecular compounds, their accuracy lags behind that of their single-protein predecessors. That’s in part because there is significantly less training data available.

                    The protein folding problem was “almost a perfect example for an AI solution,” Thornton said, because the algorithm could train on hundreds of thousands of protein structures collected in a uniform way. However, the Protein Data Bank may be an unusual example of organized data sharing in biology. Without high-quality data to train algorithms, they won’t make accurate predictions.

                    “We got lucky,” Jumper said. “We met the problem at the time it was ready to be solved.”

                    https://www.quantamagazine.org/how-ai-revolutionized-protein-science-but-didnt-end-it-20240626/

                    However, it should be noted that due to the intrinsic nature of AI, its success is not due to conceptual advancement and has not hitherto provided new intellectual interpretive models for the scientific community. If these considerations are placed in Kuhn’s framework of scientific revolution [68], AF release is a revolution without any paradigm change. Instead of “providing model problems and solutions for a community of practitioners” [68], it is a rather effective tool for solving a fundamental scientific problem.

                    https://pmc.ncbi.nlm.nih.gov/articles/PMC12109453/

                    This is because scientists working on AI (myself included) often work backwards. Instead of identifying a problem and then trying to find a solution, we start by assuming that AI will be the solution and then looking for problems to solve. But because it’s difficult to identify open scientific challenges that can be solved using AI, this “hammer in search of a nail” style of science means that researchers will often tackle problems which are suitable for using AI but which either have already been solved or don’t create new scientific knowledge.

                    ^ this is NOT the scientific method and it undermines the scientific integrity of the entire process

                    https://www.understandingai.org/p/i-got-fooled-by-ai-for-science-hypeheres

                    https://www.scilifelab.se/news/alphafold3-early-pain-points-overshadow-potential-promise/

                    https://www.reddit.com/r/biotech/comments/1d1096g/ai_for_drug_discovery/

                    https://www.reddit.com/r/Biochemistry/comments/1gui8n8/what_can_alphafold_teach_us_about_the_impact_of/

                    https://www.reddit.com/r/Biochemistry/comments/1j47wqy/thoughts_on_the_recent_veritasium_video_about/

                    https://www.reddit.com/r/labrats/comments/1b1l68p/people_are_overestimating_alphafold_and_its_a/

  • tjsauce@lemmy.world · 14 hours ago

      Most people are cool with some AI when you show them the small, non-plagiaristic stuff. It sucks that “AI” is such a big umbrella term, but the truth is that the majority of AI (measured in model size, usage, and output volume) is bad and should stop.

      Neural-network technology should not progress at the cost of our environment, short term or long term, and shouldn’t be used to dilute our collective culture and intelligence. Let’s not pretend that the dangers aren’t obvious, and let’s push for regulation.