• FreedomAdvocate

    Where’s your proof?

    My evidence would be the absolutely massive and widespread adoption of all things AI. Yours would be…?

    • supersquirrel@sopuli.xyzOP

      These things aren’t getting built, or if they’re getting built, it’s taking way, way longer than expected, which means that interest on that debt is piling up. The longer it takes, the less rational it becomes to buy further NVIDIA GPUs — after all, if data centers are taking anywhere from 18 months to three years to build, why would you be buying more of them? Where are you going to put them, Jensen?

      This also seriously brings into question the appetite that private credit and other financiers have for funding these projects, because much of the economic potential comes from the idea that these projects get built and have stable tenants. Furthermore, if the supply of AI compute is a bottleneck, this suggests that when (or if) that bottleneck is ever cleared, there will suddenly be a massive supply glut, lowering the overall value of the data centers in progress…which are, by the way, all filled with Blackwell GPUs, which will be two or three years old by the time the data centers are finally turned on.

      I also wonder whether the demand actually exists to make any of this worthwhile, or what people are actually paying for this compute.

      If we assume 3GW of IT load capacity was brought online in America, that should (theoretically) mean tens of billions of dollars of revenue thanks to the “insatiable demand for AI” — except nobody appears to be showing massive amounts of revenue from these data centers.

      https://www.wheresyoured.at/the-ai-industry-is-lying-to-you/

      Although there has been between $30 and $40 billion in enterprise investment into generative AI, a recent MIT report shows that 95 percent of organizations are seeing zero return.

      Just 5 percent of integrated artificial intelligence pilots “are extracting millions in value,” while the majority contribute no measurable impact to profits, the report found.

      https://thehill.com/policy/technology/5460663-generative-ai-zero-returns-businesses-mit-report/

      In this study, we show that, despite the ubiquity of AI-generated content, it does not perform well in search and answer engines:

      86% of articles ranking in Google Search are written by humans, and only 14% are generated using AI.

      82% of articles cited by ChatGPT & Perplexity are written by humans, and only 18% are generated using AI.

      When AI-generated articles do appear in Google Search, they tend to rank lower than human-written articles.

      https://graphite.io/five-percent/ai-content-in-search-and-llms

      They found that people perceived AI scientists more negatively than climate scientists or scientists in general, and that this negativity is driven by concern about AI scientists’ prudence – specifically, the perception that AI science is causing unintended consequences. The researchers also examined whether these negative perceptions might be a result of AI being so new and unknown, but found that public perceptions of AI science and scientists did not significantly improve from 2024 to 2025, even as AI became a more common presence in everyday life.

      https://www.asc.upenn.edu/news-events/news/ai-perceived-more-negatively-climate-science-or-science-general

      AI also poses a real risk of luring people into psychosis, because it pathologically confirms every impulse you have, so trying to argue that everyone loves AI is going to backfire on you. Everyone loved cigarettes too when they were new. People still love cigarettes; that only proves they’re addictive.

      We find that sycophancy is both prevalent and harmful. Across 11 AI models, AI affirmed users’ actions 49% more often than humans on average, including in cases involving deception, illegality, or other harms. On posts from r/AmITheAsshole, AI systems affirm users in 51% of cases where human consensus does not (0%). In our human experiments, even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right. Yet despite distorting judgment, sycophantic models were trusted and preferred. All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style. This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement.

      https://www.science.org/doi/10.1126/science.aec8352

      The study, published Thursday in the journal Science, tested 11 leading AI systems and found they all showed varying degrees of sycophancy — behavior that was overly agreeable and affirming. The problem is not just that they dispense inappropriate advice but that people trust and prefer AI more when the chatbots are justifying their convictions.

      https://www.rochesterfirst.com/science/ap-ai-is-giving-bad-advice-to-flatter-its-users-says-new-study-on-dangers-of-overly-agreeable-chatbots/

      The productivity gains aren’t there for AI, the business use cases aren’t actually there for AI, and people increasingly associate AI with “slop” as they realize how boring and low-quality AI-made content is… and even in Google’s own search rankings, AI-written content barely makes it anywhere near the top, because it scores so poorly on relevance and engagement.

      Oh yeah, and again: AI sends people into psychosis by putting them into echo chambers, so defending AI as likable isn’t a rational defense of it, any more than arguing that a Venus flytrap tastes good to a fly is a good reason for the fly to step in.